Dec 08 19:29:07 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 08 19:29:07 crc kubenswrapper[5118]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:07 crc kubenswrapper[5118]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 08 19:29:07 crc kubenswrapper[5118]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:07 crc kubenswrapper[5118]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:07 crc kubenswrapper[5118]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 08 19:29:07 crc kubenswrapper[5118]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.889631 5118 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893104 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893124 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893129 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893132 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893136 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893140 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893144 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893148 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893152 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893156 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893159 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893163 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893166 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893170 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893174 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893178 5118 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893185 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893189 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893194 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893198 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893203 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893207 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893211 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893214 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893218 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893221 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893225 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893228 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893231 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893234 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893238 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893242 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893246 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893249 5118 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893252 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893256 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893259 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893262 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893265 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893269 5118 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893272 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893275 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893278 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893281 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893284 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893287 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893295 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893302 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893307 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893311 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893315 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893319 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893322 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893326 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893329 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893332 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893335 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893340 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893344 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893348 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893352 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893361 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893366 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893370 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893374 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893378 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893382 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893386 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893390 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893393 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893397 5118 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893400 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893403 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893407 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893410 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893413 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893416 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893419 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893425 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893429 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893432 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893435 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893439 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893442 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893446 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893452 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
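The burst of feature_gate.go:328 warnings above comes from OpenShift-specific gate names being handed to a parser that only knows the gates compiled into this kubelet; unknown names are logged and skipped rather than treated as fatal, which is why startup continues. A minimal illustrative mimic of that behavior (not the actual kubelet or OpenShift code; the gate names and log prefix here are just examples):

```go
// mimicgates.go: toy filter that warns on unknown feature gates and
// keeps only the recognized ones, echoing the feature_gate.go:328 lines.
package main

import "fmt"

// knownGates stands in for the gates compiled into the binary.
var knownGates = map[string]bool{
	"KMSv1":                          true,
	"ServiceAccountTokenNodeBinding": true,
}

// applyGates drops unrecognized names with a warning instead of failing,
// mirroring why the kubelet keeps starting despite dozens of warnings.
func applyGates(requested map[string]bool) map[string]bool {
	effective := make(map[string]bool, len(requested))
	for name, enabled := range requested {
		if !knownGates[name] {
			fmt.Printf("W feature_gate: unrecognized feature gate: %s\n", name)
			continue
		}
		effective[name] = enabled
	}
	return effective
}

func main() {
	fmt.Println(applyGates(map[string]bool{
		"KMSv1":      true,
		"GatewayAPI": true, // OpenShift-only name, warned about and dropped
	}))
}
```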
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893890 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893896 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893900 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893903 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893907 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893910 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893913 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893917 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893920 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893924 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893927 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893932 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893935 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893939 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893942 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893945 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893948 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893952 5118 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893955 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893958 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893961 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893965 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893968 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893972 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893978 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893982 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893985 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893988 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893993 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.893996 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894000 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894003 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894006 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894010 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894013 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894017 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894020 5118 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894023 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894026 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894030 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894034 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894037 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894040 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894043 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894046 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894050 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894053 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894056 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894059 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894062 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894065 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894069 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894072 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894075 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894078 5118 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894081 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894087 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894090 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894093 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894096 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894100 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894103 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894106 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894109 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894112 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894115 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894118 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894121 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894124 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894128 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894131 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894135 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894138 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894147 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894151 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894154 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894157 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894162 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894166 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894169 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894173 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894176 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894180 5118 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894184 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894187 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.894190 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
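The flags.go:64 block that follows prints every registered flag with its effective value, the standard way to snapshot a component's configuration at startup. A sketch of that pattern using spf13/pflag (the two flags registered here are examples, not the kubelet's full set); VisitAll walks every flag whether or not it was set on the command line, which is why the dump below is exhaustive:

```go
// flagdump.go: print a "FLAG: --name=value" line for every registered
// flag, the pattern behind the flags.go:64 block below.
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("kubelet-sketch", pflag.ContinueOnError)
	fs.String("node-ip", "", "IP address of the node")
	fs.Int32("max-pods", 110, "maximum number of pods per node")

	// Only --node-ip is set explicitly; --max-pods keeps its default,
	// but VisitAll reports both.
	_ = fs.Parse([]string{"--node-ip=192.168.126.11"})

	fs.VisitAll(func(f *pflag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}
```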
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894610 5118 flags.go:64] FLAG: --address="0.0.0.0"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894623 5118 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894631 5118 flags.go:64] FLAG: --anonymous-auth="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894636 5118 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894644 5118 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894648 5118 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894654 5118 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894659 5118 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894663 5118 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894667 5118 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894671 5118 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894676 5118 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894680 5118 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894705 5118 flags.go:64] FLAG: --cgroup-root=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894709 5118 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894713 5118 flags.go:64] FLAG: --client-ca-file=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894717 5118 flags.go:64] FLAG: --cloud-config=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894721 5118 flags.go:64] FLAG: --cloud-provider=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894725 5118 flags.go:64] FLAG: --cluster-dns="[]"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894731 5118 flags.go:64] FLAG: --cluster-domain=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894735 5118 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894740 5118 flags.go:64] FLAG: --config-dir=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894744 5118 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894748 5118 flags.go:64] FLAG: --container-log-max-files="5"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894753 5118 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894757 5118 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894761 5118 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894765 5118 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894769 5118 flags.go:64] FLAG: --contention-profiling="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894773 5118 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894778 5118 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894782 5118 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894785 5118 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894791 5118 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894795 5118 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894799 5118 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894803 5118 flags.go:64] FLAG: --enable-load-reader="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894806 5118 flags.go:64] FLAG: --enable-server="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894811 5118 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894815 5118 flags.go:64] FLAG: --event-burst="100"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894819 5118 flags.go:64] FLAG: --event-qps="50"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894823 5118 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894827 5118 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894831 5118 flags.go:64] FLAG: --eviction-hard=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894836 5118 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894840 5118 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894844 5118 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894848 5118 flags.go:64] FLAG: --eviction-soft=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894852 5118 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894856 5118 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894859 5118 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894865 5118 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894869 5118 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894873 5118 flags.go:64] FLAG: --fail-swap-on="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894877 5118 flags.go:64] FLAG: --feature-gates=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894881 5118 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894885 5118 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894889 5118 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894893 5118 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894897 5118 flags.go:64] FLAG: --healthz-port="10248"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894901 5118 flags.go:64] FLAG: --help="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894904 5118 flags.go:64] FLAG: --hostname-override=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894908 5118 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894912 5118 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894916 5118 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894920 5118 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894923 5118 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894927 5118 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894930 5118 flags.go:64] FLAG: --image-service-endpoint=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894934 5118 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894938 5118 flags.go:64] FLAG: --kube-api-burst="100"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894942 5118 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894946 5118 flags.go:64] FLAG: --kube-api-qps="50"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894950 5118 flags.go:64] FLAG: --kube-reserved=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894953 5118 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894957 5118 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894961 5118 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894964 5118 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894969 5118 flags.go:64] FLAG: --lock-file=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894972 5118 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894976 5118 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894980 5118 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894986 5118 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894991 5118 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894994 5118 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.894998 5118 flags.go:64] FLAG: --logging-format="text"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895002 5118 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895006 5118 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895010 5118 flags.go:64] FLAG: --manifest-url=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895013 5118 flags.go:64] FLAG: --manifest-url-header=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895019 5118 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895023 5118 flags.go:64] FLAG: --max-open-files="1000000"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895028 5118 flags.go:64] FLAG: --max-pods="110"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895031 5118 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895035 5118 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895060 5118 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895064 5118 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895068 5118 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895072 5118 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895076 5118 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895087 5118 flags.go:64] FLAG: --node-status-max-images="50"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895093 5118 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895097 5118 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895102 5118 flags.go:64] FLAG: --pod-cidr=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895106 5118 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895114 5118 flags.go:64] FLAG: --pod-manifest-path=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895118 5118 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895123 5118 flags.go:64] FLAG: --pods-per-core="0"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895126 5118 flags.go:64] FLAG: --port="10250"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895130 5118 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895134 5118 flags.go:64] FLAG: --provider-id=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895138 5118 flags.go:64] FLAG: --qos-reserved=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895141 5118 flags.go:64] FLAG: --read-only-port="10255"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895145 5118 flags.go:64] FLAG: --register-node="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895148 5118 flags.go:64] FLAG: --register-schedulable="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895152 5118 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895161 5118 flags.go:64] FLAG: --registry-burst="10"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895165 5118 flags.go:64] FLAG: --registry-qps="5"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895168 5118 flags.go:64] FLAG: --reserved-cpus=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895173 5118 flags.go:64] FLAG: --reserved-memory=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895178 5118 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895182 5118 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895185 5118 flags.go:64] FLAG: --rotate-certificates="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895189 5118 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895193 5118 flags.go:64] FLAG: --runonce="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895197 5118 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895202 5118 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895206 5118 flags.go:64] FLAG: --seccomp-default="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895210 5118 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895214 5118 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895219 5118 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895224 5118 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895228 5118 flags.go:64] FLAG: --storage-driver-password="root"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895234 5118 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895237 5118 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895241 5118 flags.go:64] FLAG: --storage-driver-user="root"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895245 5118 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895248 5118 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895252 5118 flags.go:64] FLAG: --system-cgroups=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895256 5118 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895262 5118 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895265 5118 flags.go:64] FLAG: --tls-cert-file=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895269 5118 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895274 5118 flags.go:64] FLAG: --tls-min-version=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895278 5118 flags.go:64] FLAG: --tls-private-key-file=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895281 5118 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895285 5118 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895289 5118 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895295 5118 flags.go:64] FLAG: --v="2"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895300 5118 flags.go:64] FLAG: --version="false"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895305 5118 flags.go:64] FLAG: --vmodule=""
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895310 5118 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895315 5118 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
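The dump above shows the values behind the deprecation warnings at the top of the log: --container-runtime-endpoint, --volume-plugin-dir, --register-with-taints, and --system-reserved are all set on the command line even though the kubelet wants them in the file named by --config (/etc/kubernetes/kubelet.conf here). A hedged sketch of those same settings expressed as KubeletConfiguration fields, assuming the kubelet.config.k8s.io/v1beta1 types from k8s.io/kubelet/config/v1beta1 (field names per recent Kubernetes releases; verify against the running version before relying on them):

```go
// kubeletcfg.go: render the deprecated CLI flags from the dump above as
// a KubeletConfiguration document. Sketch only; not OpenShift's tooling.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// was --container-runtime-endpoint (config-file field exists in newer releases)
		ContainerRuntimeEndpoint: "/var/run/crio/crio.sock",
		// was --volume-plugin-dir
		VolumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec",
		// was --register-with-taints
		RegisterWithTaints: []corev1.Taint{{
			Key:    "node-role.kubernetes.io/master",
			Effect: corev1.TaintEffectNoSchedule,
		}},
		// was --system-reserved
		SystemReserved: map[string]string{
			"cpu":               "200m",
			"ephemeral-storage": "350Mi",
			"memory":            "350Mi",
		},
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // YAML suitable for the --config file
}
```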
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895435 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895440 5118 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895444 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895448 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895452 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895455 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895459 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895462 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895466 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895469 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895473 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895476 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895482 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895485 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895489 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895492 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895495 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895499 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895502 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895505 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895509 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895512 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895515 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895518 5118 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895522 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895525 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895528 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895533 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895537 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895540 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895543 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895548 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895551 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895555 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895558 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895561 5118 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895565 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895568 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895572 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895575 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895578 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895582 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895585 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895588 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895594 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895598 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895601 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895604 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895608 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895611 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895614 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895618 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895622 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895625 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895629 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895632 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895636 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895639 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895642 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895648 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895651 5118 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895655 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895659 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895663 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895666 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895671 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895674 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895678 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895682 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895705 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895711 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895716 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895720 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895725 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895728 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895732 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895738 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895742 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895746 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895749 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895753 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895756 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895759 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895763 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895766 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.895770 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.895775 5118 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
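The feature_gate.go:384 summary above is the effective map: only gates this kubelet actually recognizes survive, which is why it is tiny next to the requested set. Gate requests arrive as a comma-separated Name=bool string (the --feature-gates syntax); a small parser sketch for that shape, with made-up input values:

```go
// parsegates.go: parse the "Name=bool,Name=bool" syntax used by
// --feature-gates into a map like the feature_gate.go:384 summary.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseGates(s string) (map[string]bool, error) {
	gates := make(map[string]bool)
	for _, pair := range strings.Split(s, ",") {
		name, val, ok := strings.Cut(pair, "=")
		if !ok {
			return nil, fmt.Errorf("missing '=' in %q", pair)
		}
		enabled, err := strconv.ParseBool(strings.TrimSpace(val))
		if err != nil {
			return nil, fmt.Errorf("gate %q: %w", name, err)
		}
		gates[strings.TrimSpace(name)] = enabled
	}
	return gates, nil
}

func main() {
	gates, err := parseGates("KMSv1=true,ServiceAccountTokenNodeBinding=true")
	if err != nil {
		panic(err)
	}
	fmt.Println(gates) // map[KMSv1:true ServiceAccountTokenNodeBinding:true]
}
```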
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.906248 5118 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.906459 5118 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.907894 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908297 5118 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908319 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908324 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908329 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908334 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908341 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908347 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908353 5118 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908359 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908364 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908369 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908373 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908377 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908380 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908385 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908389 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908393 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908398 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908402 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908407 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908411 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908415 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908419 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908423 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908426 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908430 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908434 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908438 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908442 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908446 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908449 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908454 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908462 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908471 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908476 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908481 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908486 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908491 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908496 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908502 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908508 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908512 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908517 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908521 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908526 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908530 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908534 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908539 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908543 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908548 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908552 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908556 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908560 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908564 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908568 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908572 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908577 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908830 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908835 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908839 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908844 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908848 5118 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908853 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908861 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908868 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908875 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908880 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908884 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908889 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908893 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908897 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908902 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908906 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908911 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908914 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908919 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908924 5118 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908928 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908932 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908937 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908940 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908943 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908946 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908950 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.908953 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.908961 5118 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909144 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909152 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909157 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909161 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909166 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909171 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909177 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909181 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909187 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909191 5118 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909194 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909197 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909201 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909205 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909208 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909211 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909214 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909217 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909221 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909224 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909227 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909231 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909234 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909239 5118 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909243 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909247 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909250 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909253 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909256 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909260 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909263 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909266 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909269 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909272 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909276 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909279 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909283 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909287 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909291 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909296 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909299 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909303 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909307 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909310 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909313 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909316 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909319 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909323 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909326 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909329 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909333 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909336 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909339 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909342 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909346 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909352 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909356 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909360 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909364 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909368 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909372 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909376 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909380 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909384 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909388 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909392 5118 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909396 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909401 5118 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909405 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909409 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909415 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909420 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909424 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909428 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909433 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909438 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909442 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909446 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909450 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909454 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909459 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909464 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909468 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909472 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909476 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 19:29:07 crc kubenswrapper[5118]: W1208 19:29:07.909480 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.909488 5118 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.909963 5118 server.go:962] "Client rotation is on, will bootstrap in background"
Dec 08 19:29:07 crc kubenswrapper[5118]: E1208 19:29:07.912551 5118 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.916940 5118 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.917612 5118 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.919472 5118 server.go:1019] "Starting client certificate rotation"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.919709 5118 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.919814 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.927630 5118 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 08 19:29:07 crc kubenswrapper[5118]: E1208 19:29:07.928657 5118 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.929892 5118 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.938163 5118 log.go:25] "Validated CRI v1 runtime API"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.958568 5118 log.go:25] "Validated CRI v1 image API"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.959899 5118 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.962080 5118 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-08-19-23-00-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.962123 5118 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.980156 5118 manager.go:217] Machine: {Timestamp:2025-12-08 19:29:07.978728599 +0000 UTC m=+0.271574076 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:38ff36e9-ea31-4d0f-b411-1d90f601ae3c BootID:80ade9b2-160d-493f-aadd-1db6165f9646 Filesystems:[{Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:18:99:c9 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:18:99:c9 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b7:64:4d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e1:d8:97 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d5:01:74 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:38:e4:23 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:36:91:a5:33:f9:d9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fa:12:92:87:43:be Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.980464 5118 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.980865 5118 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.984094 5118 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.984152 5118 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.984377 5118 topology_manager.go:138] "Creating topology manager with none policy"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.984392 5118 container_manager_linux.go:306] "Creating device plugin manager"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.984418 5118 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.984615 5118 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.985080 5118 state_mem.go:36] "Initialized new in-memory state store"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.985242 5118 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.985859 5118 kubelet.go:491] "Attempting to sync node with API server"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.985876 5118 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.985890 5118 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.985902 5118 kubelet.go:397] "Adding apiserver pod source"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.985920 5118 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.987711 5118 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.987728 5118 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.988608 5118 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.988622 5118 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Dec 08 19:29:07 crc kubenswrapper[5118]: E1208 19:29:07.988633 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 19:29:07 crc kubenswrapper[5118]: E1208 19:29:07.988969 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.990056 5118 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.990548 5118 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.991240 5118 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.991980 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992027 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992048 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992064 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992085 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992101 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992122 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992138 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992155 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992199 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992223 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992427 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992857 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.992881 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Dec 08 19:29:07 crc kubenswrapper[5118]: I1208 19:29:07.994052 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.004286 5118 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.004396 5118 server.go:1295] "Started kubelet"
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.004857 5118 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.004906 5118 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.006422 5118 server_v1.go:47] "podresources" method="list" useActivePods=true
Dec 08 19:29:08 crc systemd[1]: Started Kubernetes Kubelet.
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.007836 5118 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.007893 5118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f5430e97310fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.004319486 +0000 UTC m=+0.297164963,LastTimestamp:2025-12-08 19:29:08.004319486 +0000 UTC m=+0.297164963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.008857 5118 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.008866 5118 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.009980 5118 volume_manager.go:295] "The desired_state_of_world populator starts"
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.010001 5118 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.009986 5118 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.010229 5118 server.go:317] "Adding debug handlers to kubelet server"
Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.010399 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.010775 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="200ms"
Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.015737 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.016332 5118 factory.go:55] Registering systemd factory
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.016399 5118 factory.go:223] Registration of the systemd container factory successfully
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.017018 5118 factory.go:153] Registering CRI-O factory
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.017052 5118 factory.go:223] Registration of the crio container factory successfully
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.017136 5118 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.017163 5118 factory.go:103] Registering Raw factory
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.017181 5118 manager.go:1196] Started watching for new ooms in manager
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.017982 5118 manager.go:319] Starting recovery of all containers
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038652 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038728 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038744 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038779 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038795 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038806 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038816 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038827 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038838 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038847 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038858 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038877 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038890 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038901 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038914 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038924 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038937 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038947 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038958 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038968 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.038978 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039026 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039039 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039050 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039061 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039072 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039082 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039116 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039133 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039145 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039157 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039171 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039184 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039198 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039209 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039222 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039234 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039246 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040473 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040502 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040527 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040542 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040554 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040572 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040584 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040602 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040613 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040626 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040752 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040777 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040798 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040813 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040831 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040845 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040859 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.040877 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.041990 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042067 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042100 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042116 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042142 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042158 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042176 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042199 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042214 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042237 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042253 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042276 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042291 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042308 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042330 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042346 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042370 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4"
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042384 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042407 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042423 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042439 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042461 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042477 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042498 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042516 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042529 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042550 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042567 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042587 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042602 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042626 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042644 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042659 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042676 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042707 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042726 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042742 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042763 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042789 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042808 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042829 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042842 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042861 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042874 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042899 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042914 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042928 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042947 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042961 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.042980 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" 
volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043025 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043040 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043059 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043076 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043094 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043107 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043163 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043182 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043195 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043213 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043225 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043238 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043260 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043273 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043290 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043302 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043318 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043330 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043341 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043359 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043370 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043386 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" 
volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043398 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043413 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043426 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043437 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043451 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043463 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043504 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043518 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043532 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043544 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043557 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" 
volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043571 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043583 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043600 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043612 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043624 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043637 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043650 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043666 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043679 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043708 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043720 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043732 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043746 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043757 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043772 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043783 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043800 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043817 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043828 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043843 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043854 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043868 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043879 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043892 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043903 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043915 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043931 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043943 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.039210 5118 manager.go:324] Recovery completed Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.043956 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044084 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044118 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044145 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044159 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044175 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044188 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044200 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044216 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044228 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044243 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044254 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044269 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044287 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044303 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044317 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044329 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044344 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044358 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044372 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044384 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044395 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044428 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044438 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044451 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044462 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044473 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" 
volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044488 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044505 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044526 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044539 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044552 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044564 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.044576 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051258 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051281 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051304 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051320 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" 
volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051335 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051349 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051361 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051372 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051387 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051396 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051425 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051435 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051448 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051458 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051469 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051484 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051499 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051518 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051536 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051553 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051573 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051600 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051617 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051743 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051756 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051773 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" 
seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051789 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051802 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051816 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051829 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051843 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051855 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051871 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051883 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051896 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051911 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051923 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 
19:29:08.051938 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051949 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.051964 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.053669 5118 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.053738 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.053755 5118 reconstruct.go:97] "Volume reconstruction finished" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.053764 5118 reconciler.go:26] "Reconciler: start to sync state" Dec 08 19:29:08 crc kubenswrapper[5118]: W1208 19:29:08.065191 5118 watcher.go:93] Error while processing event ("/sys/fs/cgroup/system.slice/ocp-userpasswords.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/system.slice/ocp-userpasswords.service: no such file or directory Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.071037 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.072825 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.072862 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.072875 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.073664 5118 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.073674 5118 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.073714 5118 state_mem.go:36] "Initialized new in-memory state store" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.090756 5118 policy_none.go:49] "None policy: Start" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.090810 5118 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 08 19:29:08 crc 
kubenswrapper[5118]: I1208 19:29:08.090830 5118 state_mem.go:35] "Initializing new in-memory state store" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.092822 5118 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.095131 5118 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.095199 5118 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.095238 5118 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.095251 5118 kubelet.go:2451] "Starting kubelet main sync loop" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.095303 5118 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.097263 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.116799 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.143164 5118 manager.go:341] "Starting Device Plugin manager" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.143235 5118 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.143251 5118 server.go:85] "Starting device plugin registration server" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.143818 5118 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.143840 5118 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.144005 5118 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.144142 5118 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.144150 5118 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.147783 5118 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.147865 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.196407 5118 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.196625 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.197680 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.197752 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.197762 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.198444 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.198935 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.199029 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.199049 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.199058 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.199068 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.199877 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.200003 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.200025 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.200071 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.200045 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.200084 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.201013 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.201062 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.201084 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.201345 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.201411 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.201424 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.202346 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.202394 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.202436 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.202938 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.202983 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.202997 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.202980 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.203104 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.203118 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.203783 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.203830 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.203877 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.204459 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.204473 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.204502 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.204525 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.204538 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.204542 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.205669 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.205748 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.206242 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.206270 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.206284 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.211602 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="400ms" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.244040 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.245256 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.245295 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.245306 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.245331 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.245784 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.246232 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.255560 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257456 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257498 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257532 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257795 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257825 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257842 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257857 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257897 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257913 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257929 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257944 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.257957 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.258449 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.258552 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.259123 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.259118 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.259380 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.259182 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.260035 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.260108 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.260353 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.260392 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.260429 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.260425 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.260925 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.260948 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.260967 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.260999 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.261475 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.262079 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.279672 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.306877 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.311305 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.362502 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.362778 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.362811 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.362836 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.362889 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.362938 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.362959 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.362977 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.362997 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363012 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363030 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363111 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363221 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363252 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363235 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363266 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363320 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363297 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363319 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363463 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363481 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363574 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363267 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363342 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363547 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363340 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363627 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363366 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363753 
5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363804 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363848 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.363971 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.447099 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.449043 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.449109 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.449121 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.449156 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.449976 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.547284 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.555802 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: W1208 19:29:08.573641 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-682f84228dde199a638282e2f0f5cb0df87f1e9f638931e4b3cdb548973d6dc3 WatchSource:0}: Error finding container 682f84228dde199a638282e2f0f5cb0df87f1e9f638931e4b3cdb548973d6dc3: Status 404 returned error can't find the container with id 682f84228dde199a638282e2f0f5cb0df87f1e9f638931e4b3cdb548973d6dc3 Dec 08 19:29:08 crc kubenswrapper[5118]: W1208 19:29:08.576422 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-9837aad554fc82388fbcb8df51703f3590794843c2172f049edd6834ab2a0c36 WatchSource:0}: Error finding container 9837aad554fc82388fbcb8df51703f3590794843c2172f049edd6834ab2a0c36: Status 404 returned error can't find the container with id 9837aad554fc82388fbcb8df51703f3590794843c2172f049edd6834ab2a0c36 Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.576577 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.580613 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: W1208 19:29:08.601442 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-1e36d40fa7c958f8d2ea6473ae5f1fe3387c073888ba993313abe03fd710f202 WatchSource:0}: Error finding container 1e36d40fa7c958f8d2ea6473ae5f1fe3387c073888ba993313abe03fd710f202: Status 404 returned error can't find the container with id 1e36d40fa7c958f8d2ea6473ae5f1fe3387c073888ba993313abe03fd710f202 Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.607595 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.611939 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.612612 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="800ms" Dec 08 19:29:08 crc kubenswrapper[5118]: W1208 19:29:08.630556 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-5c8f89524719f36b2ad913bb25fe98771979e0787a59c4354f2dfa7429eefb04 WatchSource:0}: Error finding container 5c8f89524719f36b2ad913bb25fe98771979e0787a59c4354f2dfa7429eefb04: Status 404 returned error can't find the container with id 5c8f89524719f36b2ad913bb25fe98771979e0787a59c4354f2dfa7429eefb04 Dec 08 19:29:08 crc kubenswrapper[5118]: W1208 19:29:08.636102 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-89bc7edfb1a95c702ca61a69c94433363edc836971a67a7199f5ddef06e5d3f0 WatchSource:0}: Error finding container 89bc7edfb1a95c702ca61a69c94433363edc836971a67a7199f5ddef06e5d3f0: Status 404 returned error can't find the container with id 89bc7edfb1a95c702ca61a69c94433363edc836971a67a7199f5ddef06e5d3f0 Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.850316 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.851606 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.851667 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.851680 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.851726 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.852219 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.895938 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 19:29:08 crc kubenswrapper[5118]: I1208 19:29:08.995059 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Dec 08 19:29:08 crc kubenswrapper[5118]: E1208 19:29:08.995226 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
38.102.83.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.100079 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"89bc7edfb1a95c702ca61a69c94433363edc836971a67a7199f5ddef06e5d3f0"} Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.101224 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"5c8f89524719f36b2ad913bb25fe98771979e0787a59c4354f2dfa7429eefb04"} Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.102044 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1e36d40fa7c958f8d2ea6473ae5f1fe3387c073888ba993313abe03fd710f202"} Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.103225 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"9837aad554fc82388fbcb8df51703f3590794843c2172f049edd6834ab2a0c36"} Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.104056 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"682f84228dde199a638282e2f0f5cb0df87f1e9f638931e4b3cdb548973d6dc3"} Dec 08 19:29:09 crc kubenswrapper[5118]: E1208 19:29:09.407333 5118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f5430e97310fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.004319486 +0000 UTC m=+0.297164963,LastTimestamp:2025-12-08 19:29:08.004319486 +0000 UTC m=+0.297164963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:09 crc kubenswrapper[5118]: E1208 19:29:09.413142 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="1.6s" Dec 08 19:29:09 crc kubenswrapper[5118]: E1208 19:29:09.500162 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 19:29:09 crc kubenswrapper[5118]: E1208 19:29:09.536927 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.653276 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.654337 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.654368 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.654377 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.654399 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:09 crc kubenswrapper[5118]: E1208 19:29:09.654862 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.993645 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 19:29:09 crc kubenswrapper[5118]: E1208 19:29:09.994619 5118 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 19:29:09 crc kubenswrapper[5118]: I1208 19:29:09.994732 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.108884 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f96b5c895f4872869c8afd92cd4a2f5eb829c355a2e35dc83c6741c426f42ebc"} Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.108936 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583"} Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.110120 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa" exitCode=0 Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.110284 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.110542 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa"} Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.111670 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.111732 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.111747 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:10 crc kubenswrapper[5118]: E1208 19:29:10.111948 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.113080 5118 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6" exitCode=0 Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.113138 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6"} Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.113262 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.113721 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.114208 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.114233 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.114242 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:10 crc kubenswrapper[5118]: E1208 19:29:10.114444 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.114862 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.114898 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.114909 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.117389 5118 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1" exitCode=0 Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.117452 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1"} Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.117553 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.118779 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.118804 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.118815 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:10 crc kubenswrapper[5118]: E1208 19:29:10.118959 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.125428 5118 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495" exitCode=0 Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.125468 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495"} Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.125613 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.126203 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.126237 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.126253 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:10 crc kubenswrapper[5118]: E1208 19:29:10.126510 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:10 crc kubenswrapper[5118]: I1208 19:29:10.995435 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Dec 08 19:29:11 crc kubenswrapper[5118]: E1208 19:29:11.014322 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="3.2s" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.140313 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7"} Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.140377 5118 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9"} Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.140390 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa"} Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.140402 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2"} Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.144994 5118 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74" exitCode=0 Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.145062 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74"} Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.145222 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.146365 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.146397 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.146409 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:11 crc kubenswrapper[5118]: E1208 19:29:11.146613 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.149181 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"5f4b44aaf2cdcfc560006f179f3c73d2f8d9096fca618c7ca57c8230fd49c15a"} Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.149220 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.150562 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.150604 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.150617 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:11 crc kubenswrapper[5118]: E1208 19:29:11.150781 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.154899 
5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"4115e50a4084c607cd9530d3ab0e2b96fee8dbc9af125d400209816dc621f62d"} Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.154956 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ad051eb042181f65b6862f8f0f09916b05c9fcd8e66d8642c1f86ae78267d1c7"} Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.154967 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c34e38756564d5facd2424d608df2958fd9546536f3c41cac83e9bfd12c30913"} Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.155035 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.156073 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.156097 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.156106 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:11 crc kubenswrapper[5118]: E1208 19:29:11.156300 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.159100 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"027e114c2682c800b54ad673ffaf9a3e6d2e4b1b44a3395f348dfc94c54ddc30"} Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.159135 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"638dff118a255984c06222c27a23f2f72f75f5f45043827e4866fd6e5ad9efa6"} Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.159318 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.159941 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.159983 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.159993 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:11 crc kubenswrapper[5118]: E1208 19:29:11.160256 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.255900 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:11 crc 
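The "SyncLoop (PLEG)" entries above come from the kubelet's Pod Lifecycle Event Generator reporting ContainerStarted/ContainerDied transitions for the static control-plane pods. A minimal, self-contained sketch of pulling those transitions out of journal text like this; the regex is written against the exact line shape above, and the program simply reads the journal from stdin (e.g. piped from journalctl):

```go
// pleg_grep.go — extract pod lifecycle events from kubelet journal output.
// Sketch only: assumes the log text above is piped in on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches: pod="ns/name" event={"ID":"...","Type":"ContainerStarted","Data":"<container id>"}
	re := regexp.MustCompile(`pod="([^"]+)" event=\{"ID":"([0-9a-f]+)","Type":"(\w+)","Data":"([0-9a-f]+)"\}`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			// m[3] = ContainerStarted/ContainerDied, m[1] = pod, m[4] = container ID
			fmt.Printf("%-16s pod=%s container=%.12s\n", m[3], m[1], m[4])
		}
	}
}
```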
Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.261217 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.261262 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.261274 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.261297 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 19:29:11 crc kubenswrapper[5118]: I1208 19:29:11.964216 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.167647 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7dbda37255dc14b3b9e3b93edd2c7db8cadf83544c8c1cc75f1802c67015635a"}
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.167985 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.169036 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.169096 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.169124 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:12 crc kubenswrapper[5118]: E1208 19:29:12.169569 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.171150 5118 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18" exitCode=0
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.171298 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.171337 5118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.171380 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.171383 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18"}
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.171550 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.171752 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.172270 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.172312 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.172326 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.172518 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.172571 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.172599 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.172668 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.172724 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.172740 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:12 crc kubenswrapper[5118]: E1208 19:29:12.172832 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.173268 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:12 crc kubenswrapper[5118]: E1208 19:29:12.173295 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.173319 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:12 crc kubenswrapper[5118]: E1208 19:29:12.173334 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:12 crc kubenswrapper[5118]: I1208 19:29:12.173342 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:12 crc kubenswrapper[5118]: E1208 19:29:12.174222 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.177502 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9502f9a9fc4385a11375d1454dc563a79e935d00cf846d1cba59363a82cdebf4"}
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.177588 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"87524e06886d743364c19bf1d1cbd1e8c7e9be19424206ec6b49d02a770729ac"}
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.177604 5118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.177620 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"16184f6f1982588e4aacf024dca32892985c428914dfab58baf03a3e0a296cbb"}
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.177645 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c054019c6129f78ea8bc4f9abd8a9cb3f052c4b135ce01e75b822c97ba27de97"}
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.177652 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.177825 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.178424 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.178454 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.178463 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:13 crc kubenswrapper[5118]: E1208 19:29:13.178772 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.179055 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.179093 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.179106 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:13 crc kubenswrapper[5118]: E1208 19:29:13.179429 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:13 crc kubenswrapper[5118]: I1208 19:29:13.351760 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.007576 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.014822 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.188404 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2e23a47c43a333a7dbc87ffbd2d9968813080ef443b1706e946996bd22bd6785"}
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.188567 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.188646 5118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.188655 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.188720 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.188809 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.190761 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.190810 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.190832 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.190868 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.190836 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.190974 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.190990 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.190870 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.191086 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:14 crc kubenswrapper[5118]: E1208 19:29:14.191442 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:14 crc kubenswrapper[5118]: E1208 19:29:14.191515 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:14 crc kubenswrapper[5118]: E1208 19:29:14.191959 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.234495 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.615443 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Dec 08 19:29:14 crc kubenswrapper[5118]: I1208 19:29:14.703851 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.191917 5118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.191984 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
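The "Rotating certificates" entry above, like the failed POST to certificatesigningrequests earlier in the log, is the kubelet's client-certificate manager trying to file a CSR so it can stop using bootstrap/anonymous credentials. For context, a stdlib-only, illustrative sketch of the subject such a node client CSR carries (CN=system:node:&lt;name&gt;, O=system:nodes is the convention Kubernetes expects for node client certificates); it generates a key locally and does not contact any API server:

```go
// csr_sketch.go — shape of a kubelet client-certificate CSR (illustrative).
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"os"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "system:node:crc",        // node name taken from this log
			Organization: []string{"system:nodes"}, // group RBAC binds node permissions to
		},
	}, key)
	if err != nil {
		panic(err)
	}
	// The kubelet submits the equivalent of this PEM to
	// /apis/certificates.k8s.io/v1/certificatesigningrequests for signing.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})
}
```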
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.192007 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.191957 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.193254 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.193338 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.193293 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.193392 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.193423 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.193443 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.193526 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.193602 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.193623 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:15 crc kubenswrapper[5118]: E1208 19:29:15.194231 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:15 crc kubenswrapper[5118]: E1208 19:29:15.194429 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:15 crc kubenswrapper[5118]: E1208 19:29:15.195302 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.798383 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.798591 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.799551 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.799583 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:15 crc kubenswrapper[5118]: I1208 19:29:15.799594 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:15 crc kubenswrapper[5118]: E1208 19:29:15.799894 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:16 crc kubenswrapper[5118]: I1208 19:29:16.195063 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:16 crc kubenswrapper[5118]: I1208 19:29:16.196127 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:16 crc kubenswrapper[5118]: I1208 19:29:16.196199 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:16 crc kubenswrapper[5118]: I1208 19:29:16.196268 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:16 crc kubenswrapper[5118]: E1208 19:29:16.197352 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.012410 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.013832 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.015091 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.015283 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.015437 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:18 crc kubenswrapper[5118]: E1208 19:29:18.016333 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:18 crc kubenswrapper[5118]: E1208 19:29:18.148282 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.220303 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.220648 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.221658 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.221723 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.221734 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:18 crc kubenswrapper[5118]: E1208 19:29:18.222092 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.864933 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.865326 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.866789 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.866857 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:18 crc kubenswrapper[5118]: I1208 19:29:18.866879 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:18 crc kubenswrapper[5118]: E1208 19:29:18.867465 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:21 crc kubenswrapper[5118]: E1208 19:29:21.262428 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Dec 08 19:29:21 crc kubenswrapper[5118]: I1208 19:29:21.392548 5118 trace.go:236] Trace[1532283595]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 19:29:11.391) (total time: 10000ms):
Dec 08 19:29:21 crc kubenswrapper[5118]: Trace[1532283595]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:29:21.392)
Dec 08 19:29:21 crc kubenswrapper[5118]: Trace[1532283595]: [10.000919937s] [10.000919937s] END
Dec 08 19:29:21 crc kubenswrapper[5118]: E1208 19:29:21.392591 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 19:29:21 crc kubenswrapper[5118]: I1208 19:29:21.531968 5118 trace.go:236] Trace[361047052]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 19:29:11.530) (total time: 10001ms):
Dec 08 19:29:21 crc kubenswrapper[5118]: Trace[361047052]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:29:21.531)
Dec 08 19:29:21 crc kubenswrapper[5118]: Trace[361047052]: [10.001513299s] [10.001513299s] END
Dec 08 19:29:21 crc kubenswrapper[5118]: E1208 19:29:21.532002 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 19:29:21 crc kubenswrapper[5118]: I1208 19:29:21.865804 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
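Note the progression in the errors above: the earlier requests failed instantly with "connect: connection refused" (nothing listening yet), while these fail after ten full seconds with "net/http: TLS handshake timeout" (the TCP socket opens, but the starting apiserver cannot complete the handshake in time). A small sketch that reproduces the distinction against the same endpoint; the host name is taken from the log, and InsecureSkipVerify is set only because reachability, not identity, is being tested here:

```go
// dial_probe.go — distinguish "connection refused" from a TLS handshake
// timeout, the two failure modes seen in the log above. Sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// The dialer timeout covers the TCP connect and the TLS handshake together.
	d := &net.Dialer{Timeout: 10 * time.Second}
	conn, err := tls.DialWithDialer(d, "tcp", "api-int.crc.testing:6443",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		// Immediate "connect: connection refused" vs. a ~10s handshake timeout.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Printf("TLS established, version 0x%x\n", conn.ConnectionState().Version)
}
```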
Dec 08 19:29:21 crc kubenswrapper[5118]: I1208 19:29:21.865947 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Dec 08 19:29:21 crc kubenswrapper[5118]: I1208 19:29:21.934443 5118 trace.go:236] Trace[1728901062]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 19:29:11.932) (total time: 10001ms):
Dec 08 19:29:21 crc kubenswrapper[5118]: Trace[1728901062]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:29:21.934)
Dec 08 19:29:21 crc kubenswrapper[5118]: Trace[1728901062]: [10.001452979s] [10.001452979s] END
Dec 08 19:29:21 crc kubenswrapper[5118]: E1208 19:29:21.934948 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 19:29:21 crc kubenswrapper[5118]: I1208 19:29:21.996875 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 08 19:29:22 crc kubenswrapper[5118]: I1208 19:29:22.028267 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 08 19:29:22 crc kubenswrapper[5118]: I1208 19:29:22.028370 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 08 19:29:22 crc kubenswrapper[5118]: I1208 19:29:22.036001 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 08 19:29:22 crc kubenswrapper[5118]: I1208 19:29:22.036437 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 08 19:29:23 crc kubenswrapper[5118]: I1208 19:29:23.357634 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]log ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]etcd ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/priority-and-fairness-filter ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/start-apiextensions-informers ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/start-apiextensions-controllers ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/crd-informer-synced ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/start-system-namespaces-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/bootstrap-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/apiservice-registration-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/apiservice-discovery-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]autoregister-completion ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/apiservice-openapi-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 08 19:29:23 crc kubenswrapper[5118]: livez check failed
Dec 08 19:29:23 crc kubenswrapper[5118]: I1208 19:29:23.357760 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:29:24 crc kubenswrapper[5118]: E1208 19:29:24.215302 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Dec 08 19:29:24 crc kubenswrapper[5118]: I1208 19:29:24.463427 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:24 crc kubenswrapper[5118]: I1208 19:29:24.464857 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:24 crc kubenswrapper[5118]: I1208 19:29:24.464932 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:24 crc kubenswrapper[5118]: I1208 19:29:24.464961 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:24 crc kubenswrapper[5118]: I1208 19:29:24.465009 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 19:29:24 crc kubenswrapper[5118]: E1208 19:29:24.482019 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 19:29:26 crc kubenswrapper[5118]: E1208 19:29:26.498034 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 19:29:26 crc kubenswrapper[5118]: E1208 19:29:26.606420 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 19:29:26 crc kubenswrapper[5118]: E1208 19:29:26.712385 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.036767 5118 trace.go:236] Trace[215685538]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 19:29:12.038) (total time: 14998ms):
Dec 08 19:29:27 crc kubenswrapper[5118]: Trace[215685538]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14998ms (19:29:27.036)
Dec 08 19:29:27 crc kubenswrapper[5118]: Trace[215685538]: [14.998350388s] [14.998350388s] END
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.037250 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430e97310fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.004319486 +0000 UTC m=+0.297164963,LastTimestamp:2025-12-08 19:29:08.004319486 +0000 UTC m=+0.297164963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.039820 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed88b810 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072847376 +0000 UTC m=+0.365692833,LastTimestamp:2025-12-08 19:29:08.072847376 +0000 UTC m=+0.365692833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.044264 5118 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.045031 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed890f18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072869656 +0000 UTC m=+0.365715113,LastTimestamp:2025-12-08 19:29:08.072869656 +0000 UTC m=+0.365715113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.051416 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed894191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072882577 +0000 UTC m=+0.365728034,LastTimestamp:2025-12-08 19:29:08.072882577 +0000 UTC m=+0.365728034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.059210 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430f1eb414f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.146413903 +0000 UTC m=+0.439259360,LastTimestamp:2025-12-08 19:29:08.146413903 +0000 UTC m=+0.439259360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.068638 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed88b810\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed88b810 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072847376 +0000 UTC m=+0.365692833,LastTimestamp:2025-12-08 19:29:08.197723813 +0000 UTC m=+0.490569270,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.074589 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed890f18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed890f18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072869656 +0000 UTC m=+0.365715113,LastTimestamp:2025-12-08 19:29:08.197757844 +0000 UTC m=+0.490603301,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.085966 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed894191\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed894191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072882577 +0000 UTC m=+0.365728034,LastTimestamp:2025-12-08 
19:29:08.197767104 +0000 UTC m=+0.490612561,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.089271 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40672->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.089404 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40672->192.168.126.11:17697: read: connection reset by peer" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.095299 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed88b810\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed88b810 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072847376 +0000 UTC m=+0.365692833,LastTimestamp:2025-12-08 19:29:08.199039648 +0000 UTC m=+0.491885105,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.105563 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed890f18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed890f18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072869656 +0000 UTC m=+0.365715113,LastTimestamp:2025-12-08 19:29:08.199054189 +0000 UTC m=+0.491899646,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.111863 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed894191\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed894191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072882577 +0000 UTC m=+0.365728034,LastTimestamp:2025-12-08 
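The repeated "cannot patch resource \"events\"" rejections around this point show event deduplication at work: rather than POSTing a fresh event each time, the kubelet re-sends the same named event (crc.187f5430ed88b810 and friends) as a PATCH whose Count climbs from 2 up to 7 while FirstTimestamp stays fixed. An illustrative in-memory version of that idea, keyed the way these events are (involved object plus reason); this is a sketch, not client-go's actual EventCorrelator:

```go
// event_dedup.go — sketch of count-based event deduplication as seen above.
package main

import (
	"fmt"
	"time"
)

type event struct {
	Count          int
	FirstTimestamp time.Time // fixed at first occurrence
	LastTimestamp  time.Time // bumped on every duplicate
}

// record either creates a new event (would be a POST) or bumps the counter on
// the cached one (would be a PATCH of the existing named event).
func record(cache map[string]*event, key string, now time.Time) *event {
	if e, ok := cache[key]; ok {
		e.Count++
		e.LastTimestamp = now
		return e
	}
	e := &event{Count: 1, FirstTimestamp: now, LastTimestamp: now}
	cache[key] = e
	return e
}

func main() {
	cache := map[string]*event{}
	start := time.Now()
	for i := 0; i < 7; i++ {
		e := record(cache, "Node/crc:NodeHasSufficientMemory", start.Add(time.Duration(i)*time.Millisecond))
		fmt.Printf("Count:%d\n", e.Count) // climbs 1..7, mirroring the log
	}
}
```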
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.111863 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed894191\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed894191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072882577 +0000 UTC m=+0.365728034,LastTimestamp:2025-12-08 19:29:08.199063239 +0000 UTC m=+0.491908696,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.119681 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed88b810\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed88b810 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072847376 +0000 UTC m=+0.365692833,LastTimestamp:2025-12-08 19:29:08.200056286 +0000 UTC m=+0.492901743,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.125893 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed890f18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed890f18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072869656 +0000 UTC m=+0.365715113,LastTimestamp:2025-12-08 19:29:08.200078556 +0000 UTC m=+0.492924013,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.133108 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed894191\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed894191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072882577 +0000 UTC m=+0.365728034,LastTimestamp:2025-12-08 19:29:08.200090267 +0000 UTC m=+0.492935724,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.137955 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed88b810\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed88b810 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072847376 +0000 UTC m=+0.365692833,LastTimestamp:2025-12-08 19:29:08.201044023 +0000 UTC m=+0.493889520,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.143048 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed890f18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed890f18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072869656 +0000 UTC m=+0.365715113,LastTimestamp:2025-12-08 19:29:08.201074104 +0000 UTC m=+0.493919601,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.153504 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed894191\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed894191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072882577 +0000 UTC m=+0.365728034,LastTimestamp:2025-12-08 19:29:08.201093524 +0000 UTC m=+0.493939021,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.158793 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed88b810\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed88b810 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072847376 +0000 UTC m=+0.365692833,LastTimestamp:2025-12-08 19:29:08.201396412 +0000 UTC m=+0.494241879,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.164988 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed890f18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed890f18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072869656 +0000 UTC m=+0.365715113,LastTimestamp:2025-12-08 19:29:08.201418683 +0000 UTC m=+0.494264140,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.171235 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed894191\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed894191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072882577 +0000 UTC m=+0.365728034,LastTimestamp:2025-12-08 19:29:08.201429693 +0000 UTC m=+0.494275150,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.177626 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed88b810\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed88b810 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072847376 +0000 UTC m=+0.365692833,LastTimestamp:2025-12-08 19:29:08.202975525 +0000 UTC m=+0.495820982,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.183385 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed890f18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed890f18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072869656 +0000 UTC m=+0.365715113,LastTimestamp:2025-12-08 19:29:08.202989946 +0000 UTC m=+0.495835403,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.189932 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed894191\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace
\"default\"" event="&Event{ObjectMeta:{crc.187f5430ed894191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072882577 +0000 UTC m=+0.365728034,LastTimestamp:2025-12-08 19:29:08.203003126 +0000 UTC m=+0.495848583,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.195317 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed88b810\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed88b810 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072847376 +0000 UTC m=+0.365692833,LastTimestamp:2025-12-08 19:29:08.203092738 +0000 UTC m=+0.495938195,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.202296 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430ed890f18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430ed890f18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.072869656 +0000 UTC m=+0.365715113,LastTimestamp:2025-12-08 19:29:08.203112229 +0000 UTC m=+0.495957686,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.208110 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54310b941399 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.576908185 +0000 UTC m=+0.869753642,LastTimestamp:2025-12-08 19:29:08.576908185 +0000 UTC m=+0.869753642,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.213542 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54310bae5243 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.578628163 +0000 UTC m=+0.871473620,LastTimestamp:2025-12-08 19:29:08.578628163 +0000 UTC m=+0.871473620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.218440 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54310d44fe56 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.60527983 +0000 UTC m=+0.898125287,LastTimestamp:2025-12-08 19:29:08.60527983 +0000 UTC m=+0.898125287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.224833 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54310f106c2b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.635388971 +0000 UTC m=+0.928234428,LastTimestamp:2025-12-08 19:29:08.635388971 +0000 UTC m=+0.928234428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: 
E1208 19:29:27.229989 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54310f47a0c8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.63900692 +0000 UTC m=+0.931852377,LastTimestamp:2025-12-08 19:29:08.63900692 +0000 UTC m=+0.931852377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.235081 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.235281 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54312d4da21f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.142716959 +0000 UTC m=+1.435562416,LastTimestamp:2025-12-08 19:29:09.142716959 +0000 UTC m=+1.435562416,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.236890 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7dbda37255dc14b3b9e3b93edd2c7db8cadf83544c8c1cc75f1802c67015635a" exitCode=255 Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.236947 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7dbda37255dc14b3b9e3b93edd2c7db8cadf83544c8c1cc75f1802c67015635a"} Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.237149 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.237751 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.237801 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.237812 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.238102 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:27 crc kubenswrapper[5118]: I1208 19:29:27.238389 5118 scope.go:117] "RemoveContainer" containerID="7dbda37255dc14b3b9e3b93edd2c7db8cadf83544c8c1cc75f1802c67015635a" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.242870 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54312d4f3b78 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.142821752 +0000 UTC m=+1.435667209,LastTimestamp:2025-12-08 19:29:09.142821752 +0000 UTC m=+1.435667209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.246897 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54312d540571 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.143135601 +0000 UTC m=+1.435981058,LastTimestamp:2025-12-08 19:29:09.143135601 +0000 UTC m=+1.435981058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.252939 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54312d5a67a0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.143553952 +0000 UTC m=+1.436399409,LastTimestamp:2025-12-08 19:29:09.143553952 +0000 UTC m=+1.436399409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.261357 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54312d66f7bb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.144377275 +0000 UTC m=+1.437222732,LastTimestamp:2025-12-08 19:29:09.144377275 +0000 UTC m=+1.437222732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.271113 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54312dfc02fd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.154145021 +0000 UTC m=+1.446990478,LastTimestamp:2025-12-08 19:29:09.154145021 +0000 UTC m=+1.446990478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.277517 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54312e0a3bfc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.155077116 +0000 UTC m=+1.447922573,LastTimestamp:2025-12-08 19:29:09.155077116 +0000 UTC m=+1.447922573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.282946 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54312e0b886a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.155162218 +0000 UTC m=+1.448007675,LastTimestamp:2025-12-08 19:29:09.155162218 +0000 UTC m=+1.448007675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.288461 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54312e13646d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.155677293 +0000 UTC m=+1.448522750,LastTimestamp:2025-12-08 19:29:09.155677293 +0000 UTC m=+1.448522750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.294143 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54312e244323 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.156782883 +0000 UTC m=+1.449628330,LastTimestamp:2025-12-08 19:29:09.156782883 +0000 UTC m=+1.449628330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.298232 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54312e43277a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 
19:29:09.158807418 +0000 UTC m=+1.451652875,LastTimestamp:2025-12-08 19:29:09.158807418 +0000 UTC m=+1.451652875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.308370 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54313f499b64 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.44444298 +0000 UTC m=+1.737288437,LastTimestamp:2025-12-08 19:29:09.44444298 +0000 UTC m=+1.737288437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.316344 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5431402bc614 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.459265044 +0000 UTC m=+1.752110531,LastTimestamp:2025-12-08 19:29:09.459265044 +0000 UTC m=+1.752110531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.321503 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5431404b8e47 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.461347911 +0000 UTC m=+1.754193398,LastTimestamp:2025-12-08 19:29:09.461347911 +0000 UTC m=+1.754193398,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.327589 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5431666c991d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.101047581 +0000 UTC m=+2.393893038,LastTimestamp:2025-12-08 19:29:10.101047581 +0000 UTC m=+2.393893038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.334103 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5431672b0ea2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.113529506 +0000 UTC m=+2.406374973,LastTimestamp:2025-12-08 19:29:10.113529506 +0000 UTC m=+2.406374973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.341670 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431675d0e8e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.116806286 +0000 UTC m=+2.409651743,LastTimestamp:2025-12-08 19:29:10.116806286 +0000 UTC m=+2.409651743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.355819 5118 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f543167da8da8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.125030824 +0000 UTC m=+2.417876291,LastTimestamp:2025-12-08 19:29:10.125030824 +0000 UTC m=+2.417876291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.371751 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54316800dc73 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.127541363 +0000 UTC m=+2.420386820,LastTimestamp:2025-12-08 19:29:10.127541363 +0000 UTC m=+2.420386820,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.377073 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54316808be0b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.128057867 +0000 UTC m=+2.420903324,LastTimestamp:2025-12-08 19:29:10.128057867 +0000 UTC m=+2.420903324,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.383122 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5431680ebd71 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.128450929 +0000 UTC m=+2.421296386,LastTimestamp:2025-12-08 19:29:10.128450929 +0000 UTC m=+2.421296386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.388278 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54317aa13711 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.440040209 +0000 UTC m=+2.732885686,LastTimestamp:2025-12-08 19:29:10.440040209 +0000 UTC m=+2.732885686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.394644 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54317aa879e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.440516072 +0000 UTC m=+2.733361529,LastTimestamp:2025-12-08 19:29:10.440516072 +0000 UTC m=+2.733361529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.413310 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54317ab15533 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.441096499 +0000 UTC m=+2.733941956,LastTimestamp:2025-12-08 19:29:10.441096499 +0000 UTC m=+2.733941956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.418464 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54317ab46232 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.441296434 +0000 UTC m=+2.734141901,LastTimestamp:2025-12-08 19:29:10.441296434 +0000 UTC m=+2.734141901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.426508 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54317abbcd9f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.441782687 +0000 UTC m=+2.734628154,LastTimestamp:2025-12-08 19:29:10.441782687 +0000 UTC m=+2.734628154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.434022 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54317b5b420d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container 
kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.452232717 +0000 UTC m=+2.745078174,LastTimestamp:2025-12-08 19:29:10.452232717 +0000 UTC m=+2.745078174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.439097 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54317b5eb871 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.452459633 +0000 UTC m=+2.745305100,LastTimestamp:2025-12-08 19:29:10.452459633 +0000 UTC m=+2.745305100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.451742 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54317b68db0b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.453123851 +0000 UTC m=+2.745969328,LastTimestamp:2025-12-08 19:29:10.453123851 +0000 UTC m=+2.745969328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.459542 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54317b713206 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.453670406 +0000 UTC m=+2.746515883,LastTimestamp:2025-12-08 19:29:10.453670406 +0000 UTC m=+2.746515883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.460851 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54317b793086 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.45419431 +0000 UTC m=+2.747039777,LastTimestamp:2025-12-08 19:29:10.45419431 +0000 UTC m=+2.747039777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.470178 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54317b914b66 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.455774054 +0000 UTC m=+2.748619511,LastTimestamp:2025-12-08 19:29:10.455774054 +0000 UTC m=+2.748619511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.476480 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54317c2e6563 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.466069859 +0000 UTC m=+2.758915316,LastTimestamp:2025-12-08 19:29:10.466069859 +0000 UTC m=+2.758915316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.482188 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5431867a007a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.638796922 +0000 UTC m=+2.931642379,LastTimestamp:2025-12-08 19:29:10.638796922 +0000 UTC m=+2.931642379,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.494180 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5431869aa67c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.640936572 +0000 UTC m=+2.933782029,LastTimestamp:2025-12-08 19:29:10.640936572 +0000 UTC m=+2.933782029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.498987 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f543187304e67 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.650744423 +0000 UTC m=+2.943589880,LastTimestamp:2025-12-08 19:29:10.650744423 +0000 UTC m=+2.943589880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.503039 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5431873df7eb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.651639787 +0000 UTC m=+2.944485244,LastTimestamp:2025-12-08 19:29:10.651639787 +0000 UTC m=+2.944485244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.513089 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5431877f2e53 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.655913555 +0000 UTC m=+2.948759012,LastTimestamp:2025-12-08 19:29:10.655913555 +0000 UTC m=+2.948759012,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.519872 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543187af710a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.659076362 +0000 UTC m=+2.951921819,LastTimestamp:2025-12-08 19:29:10.659076362 +0000 UTC m=+2.951921819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.529271 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543193be2bc7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.861368263 +0000 UTC m=+3.154213720,LastTimestamp:2025-12-08 19:29:10.861368263 +0000 UTC m=+3.154213720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.554306 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f543193d2e547 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.862726471 +0000 UTC m=+3.155571928,LastTimestamp:2025-12-08 19:29:10.862726471 +0000 UTC m=+3.155571928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.562008 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54319447d1c8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.870389192 +0000 UTC m=+3.163234659,LastTimestamp:2025-12-08 19:29:10.870389192 +0000 UTC m=+3.163234659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.567802 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543194582bf4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.871460852 +0000 UTC m=+3.164306309,LastTimestamp:2025-12-08 19:29:10.871460852 +0000 UTC m=+3.164306309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.573915 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f543194b8221a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.877749786 +0000 UTC m=+3.170595253,LastTimestamp:2025-12-08 19:29:10.877749786 +0000 UTC m=+3.170595253,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.579068 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54319ea68cf3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.044369651 +0000 UTC m=+3.337215108,LastTimestamp:2025-12-08 19:29:11.044369651 +0000 UTC m=+3.337215108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.590860 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54319f52a17e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.055647102 +0000 UTC m=+3.348492559,LastTimestamp:2025-12-08 19:29:11.055647102 +0000 UTC 
m=+3.348492559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.595848 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54319f5f214f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.056466255 +0000 UTC m=+3.349311712,LastTimestamp:2025-12-08 19:29:11.056466255 +0000 UTC m=+3.349311712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.601211 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431a4dbfa43 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.148534339 +0000 UTC m=+3.441379806,LastTimestamp:2025-12-08 19:29:11.148534339 +0000 UTC m=+3.441379806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.610016 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5431aa73ec99 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.242378393 +0000 UTC m=+3.535223850,LastTimestamp:2025-12-08 19:29:11.242378393 +0000 UTC m=+3.535223850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.615037 5118 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5431abaecb3d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.263013693 +0000 UTC m=+3.555859150,LastTimestamp:2025-12-08 19:29:11.263013693 +0000 UTC m=+3.555859150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.619870 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431b2747413 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.376630803 +0000 UTC m=+3.669476270,LastTimestamp:2025-12-08 19:29:11.376630803 +0000 UTC m=+3.669476270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.625369 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431b331f0ae openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.389049006 +0000 UTC m=+3.681894463,LastTimestamp:2025-12-08 19:29:11.389049006 +0000 UTC m=+3.681894463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.633323 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431e209ec10 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.174955536 +0000 UTC m=+4.467800993,LastTimestamp:2025-12-08 19:29:12.174955536 +0000 UTC m=+4.467800993,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.637765 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431eec4ffe8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.38854244 +0000 UTC m=+4.681387917,LastTimestamp:2025-12-08 19:29:12.38854244 +0000 UTC m=+4.681387917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.646020 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431ef6595d1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.399066577 +0000 UTC m=+4.691912074,LastTimestamp:2025-12-08 19:29:12.399066577 +0000 UTC m=+4.691912074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.654794 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431ef7b0fdb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.400474075 +0000 UTC m=+4.693319542,LastTimestamp:2025-12-08 19:29:12.400474075 +0000 UTC m=+4.693319542,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.662113 5118 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431fa2a8152 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.579744082 +0000 UTC m=+4.872589549,LastTimestamp:2025-12-08 19:29:12.579744082 +0000 UTC m=+4.872589549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.666902 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431fb0560ed openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.594088173 +0000 UTC m=+4.886933630,LastTimestamp:2025-12-08 19:29:12.594088173 +0000 UTC m=+4.886933630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.674775 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431fb1683ec openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.595211244 +0000 UTC m=+4.888056701,LastTimestamp:2025-12-08 19:29:12.595211244 +0000 UTC m=+4.888056701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.680360 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543205430c7a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.765901946 +0000 UTC m=+5.058747413,LastTimestamp:2025-12-08 19:29:12.765901946 +0000 UTC m=+5.058747413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.685743 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543205f1cc8b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.777354379 +0000 UTC m=+5.070199836,LastTimestamp:2025-12-08 19:29:12.777354379 +0000 UTC m=+5.070199836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.690237 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543206014ae7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.778369767 +0000 UTC m=+5.071215224,LastTimestamp:2025-12-08 19:29:12.778369767 +0000 UTC m=+5.071215224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.694748 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5432139a918c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:13.006518668 +0000 UTC m=+5.299364135,LastTimestamp:2025-12-08 19:29:13.006518668 +0000 UTC m=+5.299364135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.700617 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54321487d357 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:13.022067543 +0000 UTC m=+5.314913010,LastTimestamp:2025-12-08 19:29:13.022067543 +0000 UTC m=+5.314913010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.705898 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5432149cdf7f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:13.023446911 +0000 UTC m=+5.316292378,LastTimestamp:2025-12-08 19:29:13.023446911 +0000 UTC m=+5.316292378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.718003 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543223815713 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:13.273300755 +0000 UTC m=+5.566146272,LastTimestamp:2025-12-08 19:29:13.273300755 +0000 UTC m=+5.566146272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.722101 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5432242a2d81 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:13.284365697 +0000 UTC 
m=+5.577211204,LastTimestamp:2025-12-08 19:29:13.284365697 +0000 UTC m=+5.577211204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.729735 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 08 19:29:27 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-controller-manager-crc.187f543423aa11fa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 08 19:29:27 crc kubenswrapper[5118]: body: Dec 08 19:29:27 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:21.865904634 +0000 UTC m=+14.158750091,LastTimestamp:2025-12-08 19:29:21.865904634 +0000 UTC m=+14.158750091,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5118]: > Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.733513 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f543423aba31f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:21.866007327 +0000 UTC m=+14.158852784,LastTimestamp:2025-12-08 19:29:21.866007327 +0000 UTC m=+14.158852784,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.738347 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 19:29:27 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.187f54342d5886ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 19:29:27 crc kubenswrapper[5118]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: 
User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 19:29:27 crc kubenswrapper[5118]: Dec 08 19:29:27 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:22.02833278 +0000 UTC m=+14.321178237,LastTimestamp:2025-12-08 19:29:22.02833278 +0000 UTC m=+14.321178237,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5118]: > Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.745050 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54342d59903e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:22.028400702 +0000 UTC m=+14.321246159,LastTimestamp:2025-12-08 19:29:22.028400702 +0000 UTC m=+14.321246159,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.752279 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54342d5886ec\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 19:29:27 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.187f54342d5886ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 19:29:27 crc kubenswrapper[5118]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 19:29:27 crc kubenswrapper[5118]: Dec 08 19:29:27 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:22.02833278 +0000 UTC m=+14.321178237,LastTimestamp:2025-12-08 19:29:22.03639161 +0000 UTC m=+14.329237087,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5118]: > Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.756833 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54342d59903e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54342d59903e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:22.028400702 +0000 UTC m=+14.321246159,LastTimestamp:2025-12-08 19:29:22.036542184 +0000 UTC m=+14.329387661,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.760792 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 19:29:27 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.187f54347c95675b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Dec 08 19:29:27 crc kubenswrapper[5118]: body: [+]ping ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]log ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]etcd ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/priority-and-fairness-filter ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/start-apiextensions-informers ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/start-apiextensions-controllers ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/crd-informer-synced ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/start-system-namespaces-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 08 19:29:27 crc kubenswrapper[5118]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 08 19:29:27 crc 
kubenswrapper[5118]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/bootstrap-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/start-kube-aggregator-informers ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/apiservice-registration-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/apiservice-discovery-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]autoregister-completion ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/apiservice-openapi-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 08 19:29:27 crc kubenswrapper[5118]: livez check failed Dec 08 19:29:27 crc kubenswrapper[5118]: Dec 08 19:29:27 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:23.357722459 +0000 UTC m=+15.650567926,LastTimestamp:2025-12-08 19:29:23.357722459 +0000 UTC m=+15.650567926,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5118]: > Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.774829 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54347c966397 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:23.357787031 +0000 UTC m=+15.650632488,LastTimestamp:2025-12-08 19:29:23.357787031 +0000 UTC m=+15.650632488,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.779588 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 19:29:27 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.187f54355b0180fd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:40672->192.168.126.11:17697: read: connection reset by peer Dec 08 19:29:27 crc kubenswrapper[5118]: body: Dec 08 19:29:27 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:27.089348861 +0000 UTC m=+19.382194358,LastTimestamp:2025-12-08 19:29:27.089348861 +0000 UTC m=+19.382194358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5118]: > Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.786400 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54355b03b011 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40672->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:27.089491985 +0000 UTC m=+19.382337472,LastTimestamp:2025-12-08 19:29:27.089491985 +0000 UTC m=+19.382337472,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.793317 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54319f5f214f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54319f5f214f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.056466255 +0000 UTC m=+3.349311712,LastTimestamp:2025-12-08 19:29:27.240205121 +0000 UTC m=+19.533050578,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.797918 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5431aa73ec99\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5431aa73ec99 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.242378393 +0000 UTC m=+3.535223850,LastTimestamp:2025-12-08 19:29:27.498102265 +0000 UTC m=+19.790947722,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5118]: E1208 19:29:27.802425 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5431abaecb3d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5431abaecb3d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.263013693 +0000 UTC m=+3.555859150,LastTimestamp:2025-12-08 19:29:27.51291698 +0000 UTC m=+19.805762437,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.002225 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.038609 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.038984 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.040116 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.040179 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.040194 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:28 crc kubenswrapper[5118]: E1208 19:29:28.040827 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.052451 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 08 19:29:28 crc kubenswrapper[5118]: E1208 19:29:28.148596 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" 
err="failed to get node info: node \"crc\" not found" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.240871 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.242358 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.242653 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4f5929e75362c29d8b9097cafef6e1415347f9c76d1a2a5d11279a7425ed49ce"} Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.242860 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.243301 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.243327 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.243335 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:28 crc kubenswrapper[5118]: E1208 19:29:28.243739 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.243881 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.243939 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.243955 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:28 crc kubenswrapper[5118]: E1208 19:29:28.244483 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.359068 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.869335 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.869696 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.870934 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.870981 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.871004 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:28 crc kubenswrapper[5118]: E1208 19:29:28.871380 5118 kubelet.go:3336] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.872897 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:28 crc kubenswrapper[5118]: I1208 19:29:28.874861 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:29 crc kubenswrapper[5118]: I1208 19:29:29.001132 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:29 crc kubenswrapper[5118]: I1208 19:29:29.244929 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:29 crc kubenswrapper[5118]: I1208 19:29:29.245036 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:29 crc kubenswrapper[5118]: I1208 19:29:29.245150 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:29 crc kubenswrapper[5118]: I1208 19:29:29.246353 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:29 crc kubenswrapper[5118]: I1208 19:29:29.246481 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:29 crc kubenswrapper[5118]: I1208 19:29:29.246505 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:29 crc kubenswrapper[5118]: I1208 19:29:29.246544 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:29 crc kubenswrapper[5118]: I1208 19:29:29.246589 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:29 crc kubenswrapper[5118]: I1208 19:29:29.246607 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:29 crc kubenswrapper[5118]: E1208 19:29:29.247090 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:29 crc kubenswrapper[5118]: E1208 19:29:29.247476 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:29 crc kubenswrapper[5118]: I1208 19:29:29.252278 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.001315 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.248175 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.248868 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.250448 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4f5929e75362c29d8b9097cafef6e1415347f9c76d1a2a5d11279a7425ed49ce" exitCode=255
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.250632 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.250512 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4f5929e75362c29d8b9097cafef6e1415347f9c76d1a2a5d11279a7425ed49ce"}
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.250736 5118 scope.go:117] "RemoveContainer" containerID="7dbda37255dc14b3b9e3b93edd2c7db8cadf83544c8c1cc75f1802c67015635a"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.250963 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.251425 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.251464 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.251478 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.251760 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:30 crc kubenswrapper[5118]: E1208 19:29:30.251864 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.251956 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.252049 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.252170 5118 scope.go:117] "RemoveContainer" containerID="4f5929e75362c29d8b9097cafef6e1415347f9c76d1a2a5d11279a7425ed49ce"
Dec 08 19:29:30 crc kubenswrapper[5118]: E1208 19:29:30.252429 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 19:29:30 crc kubenswrapper[5118]: E1208 19:29:30.252710 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:30 crc kubenswrapper[5118]: E1208 19:29:30.258408 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54361789ad79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:30.252389753 +0000 UTC m=+22.545235220,LastTimestamp:2025-12-08 19:29:30.252389753 +0000 UTC m=+22.545235220,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:30 crc kubenswrapper[5118]: E1208 19:29:30.621579 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.882549 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.883354 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.883398 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.883410 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:30 crc kubenswrapper[5118]: I1208 19:29:30.883449 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 19:29:30 crc kubenswrapper[5118]: E1208 19:29:30.891493 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 19:29:31 crc kubenswrapper[5118]: I1208 19:29:31.000060 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:31 crc kubenswrapper[5118]: I1208 19:29:31.256128 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 08 19:29:31 crc kubenswrapper[5118]: I1208 19:29:31.259778 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:31 crc kubenswrapper[5118]: I1208 19:29:31.260517 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:31 crc kubenswrapper[5118]: I1208 19:29:31.260555 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:31 crc kubenswrapper[5118]: I1208 19:29:31.260568 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:31 crc kubenswrapper[5118]: E1208 19:29:31.260948 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:31 crc kubenswrapper[5118]: I1208 19:29:31.261248 5118 scope.go:117] "RemoveContainer" containerID="4f5929e75362c29d8b9097cafef6e1415347f9c76d1a2a5d11279a7425ed49ce"
Dec 08 19:29:31 crc kubenswrapper[5118]: E1208 19:29:31.261466 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 19:29:31 crc kubenswrapper[5118]: E1208 19:29:31.270140 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54361789ad79\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54361789ad79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:30.252389753 +0000 UTC m=+22.545235220,LastTimestamp:2025-12-08 19:29:31.261433003 +0000 UTC m=+23.554278470,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:32 crc kubenswrapper[5118]: I1208 19:29:32.003100 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:32 crc kubenswrapper[5118]: E1208 19:29:32.137635 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 19:29:32 crc kubenswrapper[5118]: I1208 19:29:32.997093 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:33 crc kubenswrapper[5118]: E1208 19:29:33.425282 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 19:29:33 crc kubenswrapper[5118]: I1208 19:29:33.889814 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:33 crc kubenswrapper[5118]: I1208 19:29:33.890208 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:33 crc kubenswrapper[5118]: I1208 19:29:33.891802 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:33 crc kubenswrapper[5118]: I1208 19:29:33.891855 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:33 crc kubenswrapper[5118]: I1208 19:29:33.891869 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:33 crc kubenswrapper[5118]: E1208 19:29:33.892440 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:33 crc kubenswrapper[5118]: I1208 19:29:33.892774 5118 scope.go:117] "RemoveContainer" containerID="4f5929e75362c29d8b9097cafef6e1415347f9c76d1a2a5d11279a7425ed49ce"
Dec 08 19:29:33 crc kubenswrapper[5118]: E1208 19:29:33.893067 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 19:29:33 crc kubenswrapper[5118]: E1208 19:29:33.902364 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54361789ad79\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54361789ad79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:30.252389753 +0000 UTC m=+22.545235220,LastTimestamp:2025-12-08 19:29:33.893031679 +0000 UTC m=+26.185877136,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:34 crc kubenswrapper[5118]: I1208 19:29:34.001058 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:35 crc kubenswrapper[5118]: I1208 19:29:35.002794 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:35 crc kubenswrapper[5118]: E1208 19:29:35.347598 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 19:29:36 crc kubenswrapper[5118]: I1208 19:29:36.002517 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:37 crc kubenswrapper[5118]: I1208 19:29:37.000370 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:37 crc kubenswrapper[5118]: E1208 19:29:37.630219 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 08 19:29:37 crc kubenswrapper[5118]: I1208 19:29:37.892392 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:37 crc kubenswrapper[5118]: I1208 19:29:37.893873 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:37 crc kubenswrapper[5118]: I1208 19:29:37.893996 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:37 crc kubenswrapper[5118]: I1208 19:29:37.894017 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:37 crc kubenswrapper[5118]: I1208 19:29:37.894056 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 19:29:37 crc kubenswrapper[5118]: E1208 19:29:37.910472 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 19:29:38 crc kubenswrapper[5118]: I1208 19:29:38.002304 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:38 crc kubenswrapper[5118]: E1208 19:29:38.149891 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 19:29:38 crc kubenswrapper[5118]: E1208 19:29:38.595940 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 19:29:39 crc kubenswrapper[5118]: I1208 19:29:39.003643 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:39 crc kubenswrapper[5118]: I1208 19:29:39.999504 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:41 crc kubenswrapper[5118]: I1208 19:29:41.002421 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:42 crc kubenswrapper[5118]: I1208 19:29:42.001150 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:43 crc kubenswrapper[5118]: I1208 19:29:43.003073 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:44 crc kubenswrapper[5118]: E1208 19:29:44.000161 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 19:29:44 crc kubenswrapper[5118]: I1208 19:29:44.000257 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:44 crc kubenswrapper[5118]: E1208 19:29:44.636926 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 08 19:29:44 crc kubenswrapper[5118]: I1208 19:29:44.910936 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:44 crc kubenswrapper[5118]: I1208 19:29:44.912132 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:44 crc kubenswrapper[5118]: I1208 19:29:44.912175 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:44 crc kubenswrapper[5118]: I1208 19:29:44.912188 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:44 crc kubenswrapper[5118]: I1208 19:29:44.912215 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 19:29:44 crc kubenswrapper[5118]: E1208 19:29:44.927839 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 19:29:45 crc kubenswrapper[5118]: I1208 19:29:45.003424 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:46 crc kubenswrapper[5118]: I1208 19:29:46.001029 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:46 crc kubenswrapper[5118]: I1208 19:29:46.096903 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:46 crc kubenswrapper[5118]: I1208 19:29:46.098602 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:46 crc kubenswrapper[5118]: I1208 19:29:46.098669 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:46 crc kubenswrapper[5118]: I1208 19:29:46.098736 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:46 crc kubenswrapper[5118]: E1208 19:29:46.099378 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:46 crc kubenswrapper[5118]: I1208 19:29:46.099832 5118 scope.go:117] "RemoveContainer" containerID="4f5929e75362c29d8b9097cafef6e1415347f9c76d1a2a5d11279a7425ed49ce"
Dec 08 19:29:46 crc kubenswrapper[5118]: E1208 19:29:46.110118 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54319f5f214f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54319f5f214f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.056466255 +0000 UTC m=+3.349311712,LastTimestamp:2025-12-08 19:29:46.101345574 +0000 UTC m=+38.394191071,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:46 crc kubenswrapper[5118]: E1208 19:29:46.319109 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5431aa73ec99\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5431aa73ec99 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.242378393 +0000 UTC m=+3.535223850,LastTimestamp:2025-12-08 19:29:46.314341951 +0000 UTC m=+38.607187418,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:46 crc kubenswrapper[5118]: E1208 19:29:46.331002 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5431abaecb3d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5431abaecb3d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.263013693 +0000 UTC m=+3.555859150,LastTimestamp:2025-12-08 19:29:46.326185834 +0000 UTC m=+38.619031311,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:47 crc kubenswrapper[5118]: I1208 19:29:47.004933 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:47 crc kubenswrapper[5118]: I1208 19:29:47.306681 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 08 19:29:47 crc kubenswrapper[5118]: I1208 19:29:47.308661 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d51eea9fda4395037ee1cf288d11a25f24244dc0a125119395d04a02318bbd20"}
Dec 08 19:29:47 crc kubenswrapper[5118]: I1208 19:29:47.308984 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:47 crc kubenswrapper[5118]: I1208 19:29:47.309843 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:47 crc kubenswrapper[5118]: I1208 19:29:47.309889 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:47 crc kubenswrapper[5118]: I1208 19:29:47.309902 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:47 crc kubenswrapper[5118]: E1208 19:29:47.310295 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.002043 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:48 crc kubenswrapper[5118]: E1208 19:29:48.151159 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.313089 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.314030 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.316362 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d51eea9fda4395037ee1cf288d11a25f24244dc0a125119395d04a02318bbd20" exitCode=255
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.316439 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d51eea9fda4395037ee1cf288d11a25f24244dc0a125119395d04a02318bbd20"}
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.316482 5118 scope.go:117] "RemoveContainer" containerID="4f5929e75362c29d8b9097cafef6e1415347f9c76d1a2a5d11279a7425ed49ce"
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.316830 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.317842 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.317896 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.317911 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:48 crc kubenswrapper[5118]: E1208 19:29:48.318317 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.318647 5118 scope.go:117] "RemoveContainer" containerID="d51eea9fda4395037ee1cf288d11a25f24244dc0a125119395d04a02318bbd20"
Dec 08 19:29:48 crc kubenswrapper[5118]: E1208 19:29:48.318967 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 19:29:48 crc kubenswrapper[5118]: E1208 19:29:48.326509 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54361789ad79\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54361789ad79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:30.252389753 +0000 UTC m=+22.545235220,LastTimestamp:2025-12-08 19:29:48.318919112 +0000 UTC m=+40.611764579,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:48 crc kubenswrapper[5118]: I1208 19:29:48.998799 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:49 crc kubenswrapper[5118]: E1208 19:29:49.062266 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 19:29:49 crc kubenswrapper[5118]: I1208 19:29:49.320760 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 08 19:29:50 crc kubenswrapper[5118]: I1208 19:29:50.001516 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:50 crc kubenswrapper[5118]: I1208 19:29:50.999377 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:51 crc kubenswrapper[5118]: E1208 19:29:51.644933 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 08 19:29:51 crc kubenswrapper[5118]: I1208 19:29:51.928814 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:51 crc kubenswrapper[5118]: I1208 19:29:51.929947 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:51 crc kubenswrapper[5118]: I1208 19:29:51.930024 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:51 crc kubenswrapper[5118]: I1208 19:29:51.930051 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:51 crc kubenswrapper[5118]: I1208 19:29:51.930094 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 19:29:51 crc kubenswrapper[5118]: E1208 19:29:51.946772 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 19:29:52 crc kubenswrapper[5118]: I1208 19:29:52.002398 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:53 crc kubenswrapper[5118]: I1208 19:29:53.002795 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:53 crc kubenswrapper[5118]: E1208 19:29:53.274254 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 19:29:53 crc kubenswrapper[5118]: I1208 19:29:53.889468 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:53 crc kubenswrapper[5118]: I1208 19:29:53.889731 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:53 crc kubenswrapper[5118]: I1208 19:29:53.890930 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:53 crc kubenswrapper[5118]: I1208 19:29:53.890995 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:53 crc kubenswrapper[5118]: I1208 19:29:53.891020 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:53 crc kubenswrapper[5118]: E1208 19:29:53.891629 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:53 crc kubenswrapper[5118]: I1208 19:29:53.892125 5118 scope.go:117] "RemoveContainer" containerID="d51eea9fda4395037ee1cf288d11a25f24244dc0a125119395d04a02318bbd20"
Dec 08 19:29:53 crc kubenswrapper[5118]: E1208 19:29:53.892608 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 19:29:53 crc kubenswrapper[5118]: E1208 19:29:53.900521 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54361789ad79\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54361789ad79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:30.252389753 +0000 UTC m=+22.545235220,LastTimestamp:2025-12-08 19:29:53.892559785 +0000 UTC m=+46.185405272,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:54 crc kubenswrapper[5118]: I1208 19:29:54.003977 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:55 crc kubenswrapper[5118]: I1208 19:29:55.006297 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:55 crc kubenswrapper[5118]: E1208 19:29:55.134026 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 19:29:55 crc kubenswrapper[5118]: I1208 19:29:55.807193 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:55 crc kubenswrapper[5118]: I1208 19:29:55.807509 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:55 crc kubenswrapper[5118]: I1208 19:29:55.808810 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:55 crc kubenswrapper[5118]: I1208 19:29:55.808881 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:55 crc kubenswrapper[5118]: I1208 19:29:55.808902 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:55 crc kubenswrapper[5118]: E1208 19:29:55.809459 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:56 crc kubenswrapper[5118]: I1208 19:29:56.003640 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:57 crc kubenswrapper[5118]: I1208 19:29:57.000043 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:57 crc kubenswrapper[5118]: I1208 19:29:57.309279 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:57 crc kubenswrapper[5118]: I1208 19:29:57.310014 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:57 crc kubenswrapper[5118]: I1208 19:29:57.311156 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:57 crc kubenswrapper[5118]: I1208 19:29:57.311256 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Dec 08 19:29:57 crc kubenswrapper[5118]: I1208 19:29:57.311278 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:57 crc kubenswrapper[5118]: E1208 19:29:57.311965 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:57 crc kubenswrapper[5118]: I1208 19:29:57.312378 5118 scope.go:117] "RemoveContainer" containerID="d51eea9fda4395037ee1cf288d11a25f24244dc0a125119395d04a02318bbd20" Dec 08 19:29:57 crc kubenswrapper[5118]: E1208 19:29:57.312758 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:29:57 crc kubenswrapper[5118]: E1208 19:29:57.318865 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54361789ad79\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54361789ad79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:30.252389753 +0000 UTC m=+22.545235220,LastTimestamp:2025-12-08 19:29:57.312663419 +0000 UTC m=+49.605508916,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:58 crc kubenswrapper[5118]: I1208 19:29:58.003724 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:58 crc kubenswrapper[5118]: E1208 19:29:58.152079 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:29:58 crc kubenswrapper[5118]: E1208 19:29:58.654265 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:29:58 crc kubenswrapper[5118]: I1208 19:29:58.947412 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:58 crc kubenswrapper[5118]: I1208 19:29:58.949132 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:58 crc kubenswrapper[5118]: I1208 19:29:58.949216 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 19:29:58 crc kubenswrapper[5118]: I1208 19:29:58.949240 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:58 crc kubenswrapper[5118]: I1208 19:29:58.949283 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:58 crc kubenswrapper[5118]: E1208 19:29:58.961952 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:29:59 crc kubenswrapper[5118]: I1208 19:29:59.000405 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:59 crc kubenswrapper[5118]: E1208 19:29:59.018418 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 19:30:00 crc kubenswrapper[5118]: I1208 19:30:00.000254 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:01 crc kubenswrapper[5118]: I1208 19:30:01.000447 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:02 crc kubenswrapper[5118]: I1208 19:30:02.002440 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:03 crc kubenswrapper[5118]: I1208 19:30:03.004539 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:04 crc kubenswrapper[5118]: I1208 19:30:04.000555 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:05 crc kubenswrapper[5118]: I1208 19:30:05.000792 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:05 crc kubenswrapper[5118]: E1208 19:30:05.662475 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:30:05 crc kubenswrapper[5118]: I1208 19:30:05.962403 5118 kubelet_node_status.go:413] 
"Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:05 crc kubenswrapper[5118]: I1208 19:30:05.963874 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:05 crc kubenswrapper[5118]: I1208 19:30:05.963962 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:05 crc kubenswrapper[5118]: I1208 19:30:05.964003 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:05 crc kubenswrapper[5118]: I1208 19:30:05.964048 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:30:05 crc kubenswrapper[5118]: E1208 19:30:05.979661 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:30:06 crc kubenswrapper[5118]: I1208 19:30:06.003075 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:06 crc kubenswrapper[5118]: I1208 19:30:06.999405 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:07 crc kubenswrapper[5118]: I1208 19:30:07.999270 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:08 crc kubenswrapper[5118]: E1208 19:30:08.152602 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.000981 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.095724 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.097740 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.097982 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.098252 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:09 crc kubenswrapper[5118]: E1208 19:30:09.099421 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.100030 5118 scope.go:117] "RemoveContainer" containerID="d51eea9fda4395037ee1cf288d11a25f24244dc0a125119395d04a02318bbd20" Dec 08 19:30:09 crc kubenswrapper[5118]: E1208 
19:30:09.113075 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54319f5f214f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54319f5f214f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.056466255 +0000 UTC m=+3.349311712,LastTimestamp:2025-12-08 19:30:09.102213756 +0000 UTC m=+61.395059253,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.378489 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.380430 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef"} Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.380657 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.381189 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.381230 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:09 crc kubenswrapper[5118]: I1208 19:30:09.381240 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:09 crc kubenswrapper[5118]: E1208 19:30:09.381554 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:10 crc kubenswrapper[5118]: I1208 19:30:10.002278 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.001669 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.386580 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.387906 5118 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.389719 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef" exitCode=255 Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.389783 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef"} Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.389827 5118 scope.go:117] "RemoveContainer" containerID="d51eea9fda4395037ee1cf288d11a25f24244dc0a125119395d04a02318bbd20" Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.390105 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.390677 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.390745 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.390763 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:11 crc kubenswrapper[5118]: E1208 19:30:11.391123 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.391426 5118 scope.go:117] "RemoveContainer" containerID="cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef" Dec 08 19:30:11 crc kubenswrapper[5118]: E1208 19:30:11.391665 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:11 crc kubenswrapper[5118]: E1208 19:30:11.400628 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54361789ad79\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54361789ad79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:30.252389753 +0000 UTC m=+22.545235220,LastTimestamp:2025-12-08 19:30:11.391628866 +0000 UTC m=+63.684474343,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:30:11 crc kubenswrapper[5118]: I1208 19:30:11.999580 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.394044 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.612201 5118 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-8s6x9" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.619913 5118 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-8s6x9" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.660122 5118 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.920403 5118 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.980124 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.981207 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.981249 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.981261 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.981416 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.993721 5118 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.993982 5118 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 08 19:30:12 crc kubenswrapper[5118]: E1208 19:30:12.994008 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.996385 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.996414 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.996424 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.996444 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:12 crc kubenswrapper[5118]: I1208 19:30:12.996481 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:12Z","lastTransitionTime":"2025-12-08T19:30:12Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.030539 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:12Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.037964 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.038006 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 
19:30:13.038018 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.038036 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.038051 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:13Z","lastTransitionTime":"2025-12-08T19:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The kubelet then retried the node-status patch three times, at E1208 19:30:13.046538, 19:30:13.063939, and 19:30:13.076793 (kubelet_node_status.go:597), each retry preceded by the same four "Recording event message for node" entries and a "Node became not ready" entry. Every attempt failed with the same error as the 19:30:13.030539 attempt above: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": dial tcp 127.0.0.1:9743: connect: connection refused. The three patch payloads are omitted here as duplicates of the one above, differing only in the condition timestamps (now 2025-12-08T19:30:13Z) and in the Ready message, which no longer includes "CSINode is not yet initialized".]
Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.076992 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.077014 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.177988 5118
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.278775 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.379105 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.479432 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.579762 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.621463 5118 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-07 19:25:12 +0000 UTC" deadline="2025-12-30 04:58:05.60913546 +0000 UTC" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.621515 5118 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="513h27m51.987623739s" Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.680783 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.781786 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.881878 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.890165 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.890493 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.891647 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.891700 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.891716 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.892125 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:13 crc kubenswrapper[5118]: I1208 19:30:13.892424 5118 scope.go:117] "RemoveContainer" containerID="cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef" Dec 08 19:30:13 crc kubenswrapper[5118]: E1208 19:30:13.892631 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:13 crc 
kubenswrapper[5118]: E1208 19:30:13.982918 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
[The same "Error getting the current node from lister" err="node \"crc\" not found" entry (kubelet_node_status.go:515) then repeats roughly every 100 ms, from 19:30:14.083227 through 19:30:18.111542, at which point this excerpt ends, cut off mid-entry; the intervening repeats are omitted.]
not found" Dec 08 19:30:18 crc kubenswrapper[5118]: E1208 19:30:18.153067 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5118]: E1208 19:30:18.212007 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5118]: E1208 19:30:18.313186 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5118]: E1208 19:30:18.413975 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5118]: E1208 19:30:18.514383 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5118]: E1208 19:30:18.615404 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5118]: E1208 19:30:18.716511 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5118]: E1208 19:30:18.816661 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5118]: E1208 19:30:18.917271 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 19:30:19.018349 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 19:30:19.119375 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 19:30:19.219723 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 19:30:19.320427 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5118]: I1208 19:30:19.381791 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:30:19 crc kubenswrapper[5118]: I1208 19:30:19.382085 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:19 crc kubenswrapper[5118]: I1208 19:30:19.383007 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:19 crc kubenswrapper[5118]: I1208 19:30:19.383043 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:19 crc kubenswrapper[5118]: I1208 19:30:19.383056 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 19:30:19.383498 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:19 crc kubenswrapper[5118]: I1208 19:30:19.383716 5118 scope.go:117] "RemoveContainer" containerID="cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 
19:30:19.383899 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 19:30:19.421217 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 19:30:19.522154 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 19:30:19.622323 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 19:30:19.723044 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 19:30:19.824007 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5118]: E1208 19:30:19.924336 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5118]: E1208 19:30:20.025464 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5118]: E1208 19:30:20.125926 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5118]: E1208 19:30:20.226853 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5118]: E1208 19:30:20.327504 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5118]: E1208 19:30:20.427868 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5118]: E1208 19:30:20.528196 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5118]: E1208 19:30:20.628291 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5118]: E1208 19:30:20.729351 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5118]: E1208 19:30:20.829718 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5118]: E1208 19:30:20.930290 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5118]: E1208 19:30:21.031039 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5118]: E1208 19:30:21.132028 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5118]: E1208 19:30:21.233181 5118 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5118]: E1208 19:30:21.333986 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5118]: I1208 19:30:21.372776 5118 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:21 crc kubenswrapper[5118]: E1208 19:30:21.435119 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5118]: E1208 19:30:21.535611 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5118]: E1208 19:30:21.636653 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5118]: E1208 19:30:21.737037 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5118]: E1208 19:30:21.837455 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5118]: E1208 19:30:21.938754 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:22 crc kubenswrapper[5118]: E1208 19:30:22.039599 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:22 crc kubenswrapper[5118]: E1208 19:30:22.140365 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:22 crc kubenswrapper[5118]: E1208 19:30:22.241058 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:22 crc kubenswrapper[5118]: E1208 19:30:22.341399 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:22 crc kubenswrapper[5118]: E1208 19:30:22.441930 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:22 crc kubenswrapper[5118]: E1208 19:30:22.542778 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:22 crc kubenswrapper[5118]: E1208 19:30:22.643528 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:22 crc kubenswrapper[5118]: E1208 19:30:22.743627 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:22 crc kubenswrapper[5118]: E1208 19:30:22.843987 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:22 crc kubenswrapper[5118]: E1208 19:30:22.944497 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:23 crc kubenswrapper[5118]: E1208 19:30:23.044848 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:23 crc kubenswrapper[5118]: E1208 19:30:23.145711 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:23 crc kubenswrapper[5118]: 
Dec 08 19:30:23 crc kubenswrapper[5118]: E1208 19:30:23.216538 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.221937 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.222019 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.222048 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.222077 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.222101 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:23 crc kubenswrapper[5118]: E1208 19:30:23.242325 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.248135 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.248502 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.248776 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.249064 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.249268 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5118]: E1208 19:30:23.266306 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.271859 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.271938 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.271967 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.271999 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.272019 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5118]: E1208 19:30:23.286977 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.291123 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.291195 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.291224 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.291255 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:23 crc kubenswrapper[5118]: I1208 19:30:23.291278 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:23 crc kubenswrapper[5118]: E1208 19:30:23.307728 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the preceding attempt; elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:23 crc kubenswrapper[5118]: E1208 19:30:23.308120 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 08 19:30:23 crc kubenswrapper[5118]: E1208 19:30:23.308177 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
[... the same "Error getting the current node from lister" record repeats roughly every 100 ms, 19:30:23.409 through 19:30:25.423 ...]
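The NotReady condition above points at a specific gap: kubelet reports NetworkPluginNotReady until a CNI network config shows up in /etc/kubernetes/cni/net.d/. A quick on-node check can confirm whether that directory is still empty. A minimal sketch, assuming Python 3 on the node and the usual *.conf/*.conflist naming conventions (the path is taken from the log message; everything else here is illustrative):

```python
#!/usr/bin/env python3
"""List CNI network configs, as a quick diagnostic for NetworkPluginNotReady."""
import json
from pathlib import Path

CNI_DIR = Path("/etc/kubernetes/cni/net.d")  # path taken from the log message above

def main() -> None:
    if not CNI_DIR.is_dir():
        print(f"{CNI_DIR} does not exist -- network plugin has written nothing yet")
        return
    # CNI conventions: single-plugin *.conf, chained *.conflist, sometimes *.json.
    configs = sorted(CNI_DIR.glob("*.conf*")) + sorted(CNI_DIR.glob("*.json"))
    if not configs:
        print(f"{CNI_DIR} is empty -- matches the NetworkPluginNotReady error")
        return
    for path in configs:
        try:
            doc = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError) as exc:
            print(f"{path.name}: unreadable ({exc})")
            continue
        # .conflist files carry a "plugins" array; single .conf files carry "type".
        kinds = [p.get("type") for p in doc.get("plugins", [doc])]
        print(f"{path.name}: name={doc.get('name')} plugins={kinds}")

if __name__ == "__main__":
    main()
```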
Dec 08 19:30:25 crc kubenswrapper[5118]: E1208 19:30:25.524134 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
[... repeats roughly every 100 ms through 19:30:26.027 ...]
Dec 08 19:30:26 crc kubenswrapper[5118]: I1208 19:30:26.095985 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:30:26 crc kubenswrapper[5118]: I1208 19:30:26.097027 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:26 crc kubenswrapper[5118]: I1208 19:30:26.097081 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:26 crc kubenswrapper[5118]: I1208 19:30:26.097098 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:26 crc kubenswrapper[5118]: E1208 19:30:26.097563 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:30:26 crc kubenswrapper[5118]: E1208 19:30:26.127417 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
[... repeats roughly every 100 ms through 19:30:28.145 ...]
Dec 08 19:30:28 crc kubenswrapper[5118]: E1208 19:30:28.153510 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 19:30:28 crc kubenswrapper[5118]: E1208 19:30:28.245537 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
[... repeats roughly every 100 ms through 19:30:29.050 ...]
node from lister" err="node \"crc\" not found" Dec 08 19:30:29 crc kubenswrapper[5118]: E1208 19:30:29.151737 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:29 crc kubenswrapper[5118]: E1208 19:30:29.251963 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:29 crc kubenswrapper[5118]: E1208 19:30:29.352464 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:29 crc kubenswrapper[5118]: E1208 19:30:29.453409 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:29 crc kubenswrapper[5118]: E1208 19:30:29.554025 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:29 crc kubenswrapper[5118]: E1208 19:30:29.654490 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:29 crc kubenswrapper[5118]: E1208 19:30:29.754736 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:29 crc kubenswrapper[5118]: E1208 19:30:29.855532 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:29 crc kubenswrapper[5118]: E1208 19:30:29.956087 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:30 crc kubenswrapper[5118]: E1208 19:30:30.056971 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:30 crc kubenswrapper[5118]: I1208 19:30:30.078263 5118 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:30 crc kubenswrapper[5118]: I1208 19:30:30.096504 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:30 crc kubenswrapper[5118]: I1208 19:30:30.097429 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5118]: I1208 19:30:30.097511 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5118]: I1208 19:30:30.097532 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5118]: E1208 19:30:30.098267 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:30 crc kubenswrapper[5118]: I1208 19:30:30.098767 5118 scope.go:117] "RemoveContainer" containerID="cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef" Dec 08 19:30:30 crc kubenswrapper[5118]: E1208 19:30:30.099098 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:30 crc kubenswrapper[5118]: E1208 19:30:30.157316 5118 
Dec 08 19:30:30 crc kubenswrapper[5118]: E1208 19:30:30.157316 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
[... repeats roughly every 100 ms through 19:30:33.483 ...]
Dec 08 19:30:33 crc kubenswrapper[5118]: E1208 19:30:33.485474 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.490918 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.490982 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.491002 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.491028 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.491046 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
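The failing status updates are Kubernetes strategic-merge patches, as the attempt logged just below shows: a $setElementOrder/conditions directive pins the order of the conditions list so the API server merges elements by their type key instead of replacing the whole array. A minimal sketch of assembling such a payload (field values copied from this log; the helper function is illustrative, not kubelet code):

```python
import json

def node_status_patch(conditions):
    """Build a strategic-merge patch shaped like the one in this log.

    $setElementOrder/conditions lists only the merge keys (type) in the
    desired order; "conditions" then carries the updated elements, which
    the server merges element-by-element on the type key.
    """
    return {
        "status": {
            "$setElementOrder/conditions": [{"type": c["type"]} for c in conditions],
            "conditions": conditions,
        }
    }

# The Ready condition as it appears in the setters.go record above.
ready = {
    "type": "Ready",
    "status": "False",
    "reason": "KubeletNotReady",
    "message": "container runtime network not ready: NetworkReady=false "
               "reason:NetworkPluginNotReady message:Network plugin returns error: "
               "no CNI configuration file in /etc/kubernetes/cni/net.d/. "
               "Has your network provider started?",
}
print(json.dumps(node_status_patch([ready]), indent=2))
```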
Dec 08 19:30:33 crc kubenswrapper[5118]: E1208 19:30:33.506619 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[... same four conditions as the 19:30:23 attempt, timestamps now 2025-12-08T19:30:33Z ...],\\\"images\\\":[... image list identical to the 19:30:23 attempt; elided ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.510943 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.510992 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
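Every patch attempt in this excerpt dies the same way: the POST to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/node is refused, meaning nothing is listening on that port yet. A connectivity probe sketch using only the standard library (host and port come from the log; the rest is an assumption about how one might check):

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> str:
    """Report whether anything accepts TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "listening"
    except ConnectionRefusedError:
        return "connection refused (no listener yet -- matches the log)"
    except socket.timeout:
        return "timed out (filtered or unreachable)"
    except OSError as exc:
        return f"error: {exc}"

# Host and port taken from the failing webhook URL in the records above.
print(tcp_probe("127.0.0.1", 9743))
```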
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.511007 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.511026 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.511042 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:33 crc kubenswrapper[5118]: E1208 19:30:33.523753 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [payload identical to the 19:30:33.506 attempt above; elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.528837 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.528984 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
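The payloads above consistently report capacity cpu=12, memory=32861260Ki, ephemeral-storage=83293888Ki against allocatable cpu=11800m, memory=32400460Ki, ephemeral-storage=76396645454 (note the allocatable ephemeral value is plain bytes while capacity is in Ki). The difference is what kubelet holds back for system reservations; the arithmetic below just makes the gap explicit:

```python
# Values copied verbatim from the status patch payloads above.
capacity    = {"cpu_m": 12000, "memory_ki": 32861260, "ephemeral_b": 83293888 * 1024}
allocatable = {"cpu_m": 11800, "memory_ki": 32400460, "ephemeral_b": 76396645454}

reserved_cpu_m  = capacity["cpu_m"] - allocatable["cpu_m"]
reserved_mem_ki = capacity["memory_ki"] - allocatable["memory_ki"]
reserved_eph_b  = capacity["ephemeral_b"] - allocatable["ephemeral_b"]

print(f"reserved CPU:    {reserved_cpu_m} millicores")        # 200 millicores
print(f"reserved memory: {reserved_mem_ki / 1024:.0f} MiB")   # 450 MiB
print(f"reserved disk:   {reserved_eph_b / 1024**3:.2f} GiB") # ~8.29 GiB
```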
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.529200 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.529303 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5118]: E1208 19:30:33.538575 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.542175 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.542248 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.542261 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.542279 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5118]: I1208 19:30:33.542315 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5118]: E1208 19:30:33.556780 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5118]: E1208 19:30:33.556911 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 19:30:33 crc kubenswrapper[5118]: E1208 19:30:33.583778 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:33 crc kubenswrapper[5118]: E1208 19:30:33.684635 5118 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:33 crc kubenswrapper[5118]: E1208 19:30:33.785809 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:33 crc kubenswrapper[5118]: E1208 19:30:33.886088 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:33 crc kubenswrapper[5118]: E1208 19:30:33.986666 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:34 crc kubenswrapper[5118]: E1208 19:30:34.087890 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:34 crc kubenswrapper[5118]: E1208 19:30:34.188312 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:34 crc kubenswrapper[5118]: E1208 19:30:34.289450 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:34 crc kubenswrapper[5118]: E1208 19:30:34.390547 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:34 crc kubenswrapper[5118]: E1208 19:30:34.491262 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:34 crc kubenswrapper[5118]: E1208 19:30:34.592398 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:34 crc kubenswrapper[5118]: E1208 19:30:34.693572 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:34 crc kubenswrapper[5118]: E1208 19:30:34.794516 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:34 crc kubenswrapper[5118]: E1208 19:30:34.895642 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:34 crc kubenswrapper[5118]: E1208 19:30:34.996493 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:35 crc kubenswrapper[5118]: E1208 19:30:35.096756 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:35 crc kubenswrapper[5118]: E1208 19:30:35.196993 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:35 crc kubenswrapper[5118]: E1208 19:30:35.298057 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:35 crc kubenswrapper[5118]: E1208 19:30:35.399155 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:35 crc kubenswrapper[5118]: E1208 19:30:35.500094 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:35 crc kubenswrapper[5118]: E1208 19:30:35.600425 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:35 crc kubenswrapper[5118]: E1208 19:30:35.700963 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:35 crc kubenswrapper[5118]: E1208 
19:30:35.801939 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:35 crc kubenswrapper[5118]: E1208 19:30:35.902805 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:36 crc kubenswrapper[5118]: E1208 19:30:36.003782 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:36 crc kubenswrapper[5118]: E1208 19:30:36.104509 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:36 crc kubenswrapper[5118]: E1208 19:30:36.205423 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:36 crc kubenswrapper[5118]: E1208 19:30:36.306374 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:36 crc kubenswrapper[5118]: E1208 19:30:36.407425 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:36 crc kubenswrapper[5118]: E1208 19:30:36.508341 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:36 crc kubenswrapper[5118]: E1208 19:30:36.609419 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:36 crc kubenswrapper[5118]: E1208 19:30:36.709991 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:36 crc kubenswrapper[5118]: E1208 19:30:36.810208 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:36 crc kubenswrapper[5118]: E1208 19:30:36.911158 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:37 crc kubenswrapper[5118]: E1208 19:30:37.012250 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:37 crc kubenswrapper[5118]: E1208 19:30:37.113253 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:37 crc kubenswrapper[5118]: E1208 19:30:37.213973 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:37 crc kubenswrapper[5118]: E1208 19:30:37.314082 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:37 crc kubenswrapper[5118]: E1208 19:30:37.415170 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:37 crc kubenswrapper[5118]: E1208 19:30:37.516006 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:37 crc kubenswrapper[5118]: E1208 19:30:37.617005 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:37 crc kubenswrapper[5118]: E1208 19:30:37.717631 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:37 crc kubenswrapper[5118]: E1208 19:30:37.819189 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:37 crc 
kubenswrapper[5118]: E1208 19:30:37.919773 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:38 crc kubenswrapper[5118]: E1208 19:30:38.020249 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.096607 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.098736 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.098811 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.098837 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5118]: E1208 19:30:38.099847 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:38 crc kubenswrapper[5118]: E1208 19:30:38.120484 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:38 crc kubenswrapper[5118]: E1208 19:30:38.154038 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:30:38 crc kubenswrapper[5118]: E1208 19:30:38.220591 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:38 crc kubenswrapper[5118]: E1208 19:30:38.321720 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:38 crc kubenswrapper[5118]: E1208 19:30:38.422276 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:38 crc kubenswrapper[5118]: E1208 19:30:38.523208 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.602004 5118 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.609883 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.623838 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.625789 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.625843 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.625869 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.625902 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.625925 5118 
setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.638586 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.728684 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.728820 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.728840 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.728872 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.728900 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.739886 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.832078 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.832183 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.832202 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.832226 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.832247 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.837372 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.934509 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.934584 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.934611 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.934643 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5118]: I1208 19:30:38.934666 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.038119 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.038193 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.038212 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.038235 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.038255 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.041461 5118 apiserver.go:52] "Watching apiserver" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.054408 5118 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.057265 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-additional-cni-plugins-xg8tn","openshift-multus/network-metrics-daemon-qmvkf","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-node-k6klf","openshift-image-registry/node-ca-7g24j","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/machine-config-daemon-twnt9","openshift-multus/multus-j4b8g","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-dns/node-resolver-fp8c5","openshift-etcd/etcd-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.058456 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.059632 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.059822 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.060270 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.061374 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.061806 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.062866 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.063015 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.063057 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.063246 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.063389 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.063724 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.064062 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.064513 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.064517 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.064672 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.075473 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.075502 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fp8c5" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.075591 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.078389 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.078772 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.079035 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.079565 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.084221 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.084293 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.084386 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.084387 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.084519 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.084533 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.088010 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7g24j" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.088163 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.088223 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.090902 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.091061 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.091195 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.091551 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.091730 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.091761 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.091795 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.092216 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.092731 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.096242 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.097740 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.099785 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.099962 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.100095 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.101781 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.104170 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.104453 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.104868 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.105928 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.106253 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.106551 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.106896 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.108974 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.109223 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.110471 5118 scope.go:117] "RemoveContainer" containerID="cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.111067 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.114470 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.115067 5118 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.116882 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.125919 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126002 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126046 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 
19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126089 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126126 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126162 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126201 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126237 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126274 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126310 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126350 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126389 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126425 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: 
\"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126460 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.126567 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.127151 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.127587 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.127672 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.127770 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.127843 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.127884 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.127947 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.128013 5118 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.128064 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.128115 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.128162 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.128223 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.128814 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.129173 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.129207 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.129656 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.128446 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130151 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130215 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130275 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130331 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130387 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130452 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.129956 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130512 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130571 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130593 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130628 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130738 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130799 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130854 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130440 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130914 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.130971 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.131227 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.131286 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.131212 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.131408 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.131475 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.131542 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.131553 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.131622 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.131728 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.131931 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132012 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132074 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132131 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132187 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132244 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132298 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132349 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132401 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132457 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132517 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132573 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132662 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132787 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132849 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132915 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.132967 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.133030 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.133100 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.133304 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.133367 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.133878 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.134631 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.134685 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.134745 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.134736 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.134770 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.134967 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.135828 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.136226 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.136271 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.136424 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.136519 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:39.636462932 +0000 UTC m=+91.929308429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.136576 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.136576 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.136753 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.136789 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.136949 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.136991 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.137264 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.137557 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.137752 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.137821 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.137916 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.138096 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.138150 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.138188 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.138322 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.138343 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.138935 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.139011 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.139628 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.139614 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.139741 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.139776 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.139786 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.139742 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xg8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.139865 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.139882 5118 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.139909 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.139939 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.140012 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.140050 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.140144 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.140227 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.140512 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.140651 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.140848 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.140906 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.141239 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.141236 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.141609 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.141839 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142042 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142102 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142103 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142121 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142212 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142057 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142226 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142134 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142328 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142397 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142476 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142519 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142557 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142594 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142758 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.142808 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.143212 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.143298 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.143495 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.143518 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.143660 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.143756 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.143816 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.143866 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.143868 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.143941 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.143978 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144046 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144126 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144268 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144434 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144469 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144496 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144547 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144749 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144804 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144834 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144871 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144895 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144915 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144941 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144962 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144981 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144998 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145014 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145040 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145059 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145081 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145115 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145141 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145166 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145191 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145219 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145249 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145275 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145297 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145405 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145435 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145473 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145499 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145521 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145550 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144431 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145759 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144680 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144749 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.144828 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145216 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145376 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145906 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.145932 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.146436 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.146911 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.146967 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147011 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147058 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147204 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147254 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147298 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147340 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147385 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147426 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod 
\"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147466 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147508 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147548 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147589 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147628 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147669 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147751 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147846 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147888 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147928 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147970 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148008 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148258 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148293 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148327 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148359 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148389 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148419 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148445 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148476 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148506 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148536 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.147798 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148265 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148950 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148643 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.149147 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.149237 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.149415 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.149484 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.149507 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.150033 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.150081 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.151334 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.151366 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). 
InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.151418 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.151784 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.151803 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.151871 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152061 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152102 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152257 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152313 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.148567 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152407 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152438 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152437 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152458 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152483 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152507 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152529 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152548 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152566 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: 
\"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152623 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152644 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152661 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152680 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152726 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152753 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152773 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152786 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152794 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152891 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152934 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.152963 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153009 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153224 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153505 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153537 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153538 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153563 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153590 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153617 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153642 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153678 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153762 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153798 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153833 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153851 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153866 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153924 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153956 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153983 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.154996 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155042 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155070 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155103 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155129 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155155 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: 
\"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155187 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155370 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155400 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156346 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156389 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156748 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156810 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.153981 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.154001 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.154326 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.154384 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.154468 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156845 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.154549 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.154569 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.154743 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.154898 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.154912 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.154958 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155060 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155485 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155576 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155613 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.155915 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.157024 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156083 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156168 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156263 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156375 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156386 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156393 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.156662 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.157451 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.157768 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.158509 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.158906 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.158930 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159051 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fp8c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96pbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fp8c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159108 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159186 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159207 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159236 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159292 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159292 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159361 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159384 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159471 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159479 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159493 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159543 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159546 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159564 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159667 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159714 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159737 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159758 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159756 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159857 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-system-cni-dir\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159875 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159888 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159885 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.159965 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.160006 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.160027 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.160038 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-cni-dir\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.160210 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.160384 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.160647 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-cnibin\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.161644 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.161886 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.161542 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.161980 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.162163 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.162850 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.162340 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.162473 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.162622 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.162653 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.162999 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.164199 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.164465 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.164644 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.164753 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.164913 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.165038 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.165062 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.165116 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.165455 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.165806 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.166082 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.166134 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.166144 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.166396 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.166531 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-run-netns\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.166699 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.166971 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.167058 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.167931 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-var-lib-cni-bin\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168198 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-var-lib-kubelet\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168268 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-conf-dir\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168327 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0052f7cb-2eab-42e7-8f98-b1544811d9c3-rootfs\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168360 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168420 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168460 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168482 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-log-socket\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168499 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-run-k8s-cni-cncf-io\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.167116 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.167323 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.167394 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168525 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkhpc\" (UniqueName: \"kubernetes.io/projected/688024c3-8b6c-450e-a7b2-3b3165438f4b-kube-api-access-bkhpc\") pod \"node-ca-7g24j\" (UID: \"688024c3-8b6c-450e-a7b2-3b3165438f4b\") " pod="openshift-image-registry/node-ca-7g24j" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168559 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.167679 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.167762 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168747 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.167885 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168112 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168458 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168503 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168837 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.168911 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.169862 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.172126 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.172400 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.172785 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.172908 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0052f7cb-2eab-42e7-8f98-b1544811d9c3-mcd-auth-proxy-config\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.172962 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-bin\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.172990 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173010 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-cnibin\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173032 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-netns\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 
19:30:39.173053 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-ovn\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173073 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-env-overrides\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173114 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7g24j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"688024c3-8b6c-450e-a7b2-3b3165438f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkhpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7g24j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173170 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-run-multus-certs\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173255 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173287 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96pbz\" (UniqueName: \"kubernetes.io/projected/86ab4495-4d65-4b3e-9a3d-bfaad21f506a-kube-api-access-96pbz\") pod \"node-resolver-fp8c5\" (UID: \"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\") " pod="openshift-dns/node-resolver-fp8c5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173304 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-slash\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173325 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173348 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173368 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-systemd\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173590 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173669 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-os-release\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.173761 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87shs\" (UniqueName: \"kubernetes.io/projected/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-kube-api-access-87shs\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.174082 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.174128 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-hostroot\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.174165 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-daemon-config\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.174205 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0052f7cb-2eab-42e7-8f98-b1544811d9c3-proxy-tls\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.174158 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.174732 5118 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.175416 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-systemd-units\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.175471 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.175503 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-os-release\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.175533 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pfxh\" (UniqueName: \"kubernetes.io/projected/0052f7cb-2eab-42e7-8f98-b1544811d9c3-kube-api-access-7pfxh\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.175566 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.175592 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc62458c-133b-4909-91ab-b28870b78816-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.176329 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovn-node-metrics-cert\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.176395 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-system-cni-dir\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.176422 5118 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-var-lib-cni-multus\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.176448 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-etc-openvswitch\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.176472 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-config\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.176897 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-cni-binary-copy\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.176536 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.177324 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.177546 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/688024c3-8b6c-450e-a7b2-3b3165438f4b-serviceca\") pod \"node-ca-7g24j\" (UID: \"688024c3-8b6c-450e-a7b2-3b3165438f4b\") " pod="openshift-image-registry/node-ca-7g24j" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.177578 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/86ab4495-4d65-4b3e-9a3d-bfaad21f506a-hosts-file\") pod \"node-resolver-fp8c5\" (UID: \"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\") " pod="openshift-dns/node-resolver-fp8c5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.178915 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-var-lib-openvswitch\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.178952 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-netd\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.178951 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.178983 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqt29\" (UniqueName: \"kubernetes.io/projected/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-kube-api-access-nqt29\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179028 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179420 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179740 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179767 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-socket-dir-parent\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179792 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-etc-kubernetes\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179821 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179846 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h27mb\" (UniqueName: \"kubernetes.io/projected/fc62458c-133b-4909-91ab-b28870b78816-kube-api-access-h27mb\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179876 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-script-lib\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179898 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179932 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c652h\" (UniqueName: \"kubernetes.io/projected/b9693139-63f6-471e-ae19-744460a6b114-kube-api-access-c652h\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179952 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/86ab4495-4d65-4b3e-9a3d-bfaad21f506a-tmp-dir\") pod \"node-resolver-fp8c5\" (UID: \"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\") " pod="openshift-dns/node-resolver-fp8c5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.179971 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-kubelet\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180000 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r955p\" (UniqueName: \"kubernetes.io/projected/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-kube-api-access-r955p\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180021 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/688024c3-8b6c-450e-a7b2-3b3165438f4b-host\") pod \"node-ca-7g24j\" (UID: \"688024c3-8b6c-450e-a7b2-3b3165438f4b\") " pod="openshift-image-registry/node-ca-7g24j" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180045 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180063 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-openvswitch\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180084 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-node-log\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180456 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180488 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180510 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-cni-binary-copy\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180729 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180748 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180761 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180774 5118 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180786 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180797 5118 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180810 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath 
\"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180822 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180836 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180879 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180892 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180904 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180916 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180933 5118 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180946 5118 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180959 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180973 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180987 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.180999 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181014 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath 
\"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181026 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181039 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181054 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181066 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181077 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181091 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181104 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181117 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181130 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181142 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181153 5118 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181164 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181176 5118 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 08 
19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181187 5118 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181198 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181211 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181224 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181236 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181247 5118 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181259 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181273 5118 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181285 5118 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181298 5118 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181310 5118 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181324 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181337 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181350 5118 
reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181365 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181379 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181391 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181403 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181415 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181430 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181443 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181456 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181469 5118 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181482 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181494 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181506 5118 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181517 5118 reconciler_common.go:299] 
"Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181529 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.181725 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.181867 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.181959 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:39.681932124 +0000 UTC m=+91.974777581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.182581 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183152 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183330 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183361 5118 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183375 5118 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183387 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath 
\"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183401 5118 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183413 5118 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183426 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183438 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183451 5118 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183465 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183477 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183490 5118 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183500 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183522 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183534 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183548 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183561 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 08 
19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183572 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183585 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.183760 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.183849 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:39.683831686 +0000 UTC m=+91.976677153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183597 5118 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183893 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183907 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183920 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183931 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183942 5118 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183956 5118 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183968 5118 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183980 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.183993 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184004 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184015 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184028 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184039 5118 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184148 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184164 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184262 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184359 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184377 5118 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184512 5118 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184533 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184548 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184565 5118 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184630 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184650 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184665 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184720 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184738 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184753 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184790 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184804 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184815 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184824 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184835 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184847 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184879 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184892 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184902 5118 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184913 5118 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184923 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184955 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184968 5118 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184977 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184988 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.184998 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185029 5118 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185041 5118 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185054 5118 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185063 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185074 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185084 5118 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185114 5118 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185126 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185136 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185145 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185157 5118 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185189 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185202 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185211 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185221 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185230 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185240 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185273 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185283 5118 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185292 5118 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185301 5118 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185309 5118 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185319 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185355 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185364 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185375 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185384 5118 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc 
kubenswrapper[5118]: I1208 19:30:39.185393 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185404 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185435 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185445 5118 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185455 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185464 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185474 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185483 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185513 5118 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185523 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185536 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185546 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185554 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185565 5118 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185555 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0052f7cb-2eab-42e7-8f98-b1544811d9c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-twnt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: 
I1208 19:30:39.185594 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185679 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185715 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185725 5118 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185737 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185745 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185753 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185763 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185795 5118 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185805 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185815 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185813 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.185825 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 
crc kubenswrapper[5118]: I1208 19:30:39.186070 5118 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.186089 5118 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.186102 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.186114 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.188216 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.188241 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.188256 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.188432 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:39.68840486 +0000 UTC m=+91.981250317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.189250 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.189814 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.191264 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.191517 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.193498 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.193517 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.193529 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.193592 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:39.693571672 +0000 UTC m=+91.986417129 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.195370 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.195945 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.197386 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.197576 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.197888 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.197952 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.197938 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.198704 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.198936 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.198964 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.199203 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.199807 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.199822 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.200026 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.200165 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.200556 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.201291 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.201539 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.201553 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.201825 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.202145 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.202229 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.202342 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.203491 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.203587 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.203605 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.204630 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.205453 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.205618 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.207212 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.207937 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.208133 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.208135 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.208246 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.208322 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.210790 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.211505 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.214699 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.215568 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.225331 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.237258 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.238712 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.241556 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.244964 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.245005 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.245017 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.245036 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.245050 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.249465 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.252901 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.259916 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fp8c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96pbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fp8c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.268036 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7g24j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"688024c3-8b6c-450e-a7b2-3b3165438f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkhpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7g24j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.276086 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0052f7cb-2eab-42e7-8f98-b1544811d9c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-twnt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287132 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-var-lib-cni-multus\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287182 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-etc-openvswitch\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287214 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-config\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 
19:30:39.287239 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-cni-binary-copy\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287260 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-etc-openvswitch\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287266 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/688024c3-8b6c-450e-a7b2-3b3165438f4b-serviceca\") pod \"node-ca-7g24j\" (UID: \"688024c3-8b6c-450e-a7b2-3b3165438f4b\") " pod="openshift-image-registry/node-ca-7g24j" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287336 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/86ab4495-4d65-4b3e-9a3d-bfaad21f506a-hosts-file\") pod \"node-resolver-fp8c5\" (UID: \"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\") " pod="openshift-dns/node-resolver-fp8c5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287364 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-var-lib-openvswitch\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287386 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-netd\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287453 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nqt29\" (UniqueName: \"kubernetes.io/projected/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-kube-api-access-nqt29\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287476 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287520 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-socket-dir-parent\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287542 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-etc-kubernetes\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287568 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h27mb\" (UniqueName: \"kubernetes.io/projected/fc62458c-133b-4909-91ab-b28870b78816-kube-api-access-h27mb\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287592 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-script-lib\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287617 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c652h\" (UniqueName: \"kubernetes.io/projected/b9693139-63f6-471e-ae19-744460a6b114-kube-api-access-c652h\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287638 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/86ab4495-4d65-4b3e-9a3d-bfaad21f506a-tmp-dir\") pod \"node-resolver-fp8c5\" (UID: \"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\") " pod="openshift-dns/node-resolver-fp8c5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287663 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-kubelet\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287714 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r955p\" (UniqueName: \"kubernetes.io/projected/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-kube-api-access-r955p\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287744 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/688024c3-8b6c-450e-a7b2-3b3165438f4b-host\") pod \"node-ca-7g24j\" (UID: \"688024c3-8b6c-450e-a7b2-3b3165438f4b\") " pod="openshift-image-registry/node-ca-7g24j" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287772 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287800 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-openvswitch\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287835 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-etc-kubernetes\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287905 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-netd\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287909 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-var-lib-openvswitch\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288049 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/86ab4495-4d65-4b3e-9a3d-bfaad21f506a-hosts-file\") pod \"node-resolver-fp8c5\" (UID: \"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\") " pod="openshift-dns/node-resolver-fp8c5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288266 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-config\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.287236 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-var-lib-cni-multus\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288393 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-socket-dir-parent\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288464 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/688024c3-8b6c-450e-a7b2-3b3165438f4b-serviceca\") pod \"node-ca-7g24j\" (UID: \"688024c3-8b6c-450e-a7b2-3b3165438f4b\") " pod="openshift-image-registry/node-ca-7g24j" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288582 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-openvswitch\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 
19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288630 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-node-log\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288610 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/688024c3-8b6c-450e-a7b2-3b3165438f4b-host\") pod \"node-ca-7g24j\" (UID: \"688024c3-8b6c-450e-a7b2-3b3165438f4b\") " pod="openshift-image-registry/node-ca-7g24j" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288643 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-node-log\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.288740 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288778 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.288822 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs podName:b9693139-63f6-471e-ae19-744460a6b114 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:39.788801743 +0000 UTC m=+92.081647420 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs") pod "network-metrics-daemon-qmvkf" (UID: "b9693139-63f6-471e-ae19-744460a6b114") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288830 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-script-lib\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288841 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288847 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-cni-binary-copy\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288858 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-kubelet\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288924 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-system-cni-dir\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288962 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288996 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-system-cni-dir\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.288994 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 
19:30:39.289069 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289098 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289108 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-cni-dir\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289143 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-cnibin\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289177 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-run-netns\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289285 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-cnibin\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289352 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-var-lib-cni-bin\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289423 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-run-netns\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289424 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-cni-dir\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289494 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-var-lib-kubelet\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289548 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-var-lib-cni-bin\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289561 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-conf-dir\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289633 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0052f7cb-2eab-42e7-8f98-b1544811d9c3-rootfs\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289724 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0052f7cb-2eab-42e7-8f98-b1544811d9c3-rootfs\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289726 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-conf-dir\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.289632 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-var-lib-kubelet\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.290024 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.290365 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/86ab4495-4d65-4b3e-9a3d-bfaad21f506a-tmp-dir\") pod \"node-resolver-fp8c5\" (UID: \"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\") " pod="openshift-dns/node-resolver-fp8c5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.290441 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-cni-binary-copy\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " 
pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.290482 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.290536 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.291044 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.290647 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.291075 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-log-socket\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.290667 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.291214 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-run-k8s-cni-cncf-io\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.291224 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-log-socket\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.291470 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-cni-binary-copy\") pod \"multus-j4b8g\" (UID: 
\"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.291557 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.292646 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bkhpc\" (UniqueName: \"kubernetes.io/projected/688024c3-8b6c-450e-a7b2-3b3165438f4b-kube-api-access-bkhpc\") pod \"node-ca-7g24j\" (UID: \"688024c3-8b6c-450e-a7b2-3b3165438f4b\") " pod="openshift-image-registry/node-ca-7g24j" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.292016 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-run-k8s-cni-cncf-io\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.293580 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0052f7cb-2eab-42e7-8f98-b1544811d9c3-mcd-auth-proxy-config\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.293678 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-bin\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.293748 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-cnibin\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.293817 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-cnibin\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.293907 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-netns\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.293971 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-netns\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.294022 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-ovn\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.294057 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-ovn\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.294106 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-env-overrides\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.294742 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0052f7cb-2eab-42e7-8f98-b1544811d9c3-mcd-auth-proxy-config\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.294829 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-bin\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.294898 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-run-multus-certs\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.294899 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-env-overrides\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.294958 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.294985 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96pbz\" (UniqueName: \"kubernetes.io/projected/86ab4495-4d65-4b3e-9a3d-bfaad21f506a-kube-api-access-96pbz\") pod \"node-resolver-fp8c5\" (UID: \"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\") " pod="openshift-dns/node-resolver-fp8c5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295005 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-slash\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295029 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-systemd\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295048 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-os-release\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295071 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-87shs\" (UniqueName: \"kubernetes.io/projected/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-kube-api-access-87shs\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295108 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-hostroot\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295128 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-daemon-config\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295150 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0052f7cb-2eab-42e7-8f98-b1544811d9c3-proxy-tls\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295171 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-systemd-units\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295191 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295212 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-os-release\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295234 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7pfxh\" (UniqueName: \"kubernetes.io/projected/0052f7cb-2eab-42e7-8f98-b1544811d9c3-kube-api-access-7pfxh\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295257 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc62458c-133b-4909-91ab-b28870b78816-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295277 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovn-node-metrics-cert\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295297 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-system-cni-dir\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295380 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-system-cni-dir\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295403 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295433 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-host-run-multus-certs\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295450 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295465 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295478 5118 reconciler_common.go:299] 
"Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295491 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295504 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295516 5118 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295527 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295540 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295553 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295565 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295578 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295590 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295602 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295615 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295627 5118 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295639 5118 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295650 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295664 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295676 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295708 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295719 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295733 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295746 5118 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295760 5118 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295777 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295789 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295800 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295811 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295823 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295834 5118 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295846 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295858 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295870 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295882 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295896 5118 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295907 5118 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295919 5118 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295931 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295943 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.295985 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.296106 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-slash\") pod \"ovnkube-node-k6klf\" 
(UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.296135 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-systemd\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.296206 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-os-release\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.296306 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-hostroot\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.296937 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-multus-daemon-config\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.300784 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-os-release\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.300881 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-systemd-units\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.300916 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.301991 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0052f7cb-2eab-42e7-8f98-b1544811d9c3-proxy-tls\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.303973 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc62458c-133b-4909-91ab-b28870b78816-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.304160 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56a13789-0247-4d3a-9b22-6f0bc1a77b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://16184f6f1982588e4aacf024dca32892985c428914dfab58baf03a3e0a296cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://87524e06886d743364c19bf1d1cbd1e8c7e9be19424206ec6b49d02a770729ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},
{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9502f9a9fc4385a11375d1454dc563a79e935d00cf846d1cba59363a82cdebf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e23a47c43a333a7dbc87ffbd2d9968813080ef443b1706e946996bd22bd6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://c054019c6129f78ea8bc4f9abd8a9cb3f052c4b135ce01e75b822c97ba27de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192
.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.306677 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovn-node-metrics-cert\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.311195 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r955p\" (UniqueName: \"kubernetes.io/projected/aa21ead9-3381-422c-b52e-4a10a3ed1bd4-kube-api-access-r955p\") pod \"multus-additional-cni-plugins-xg8tn\" (UID: \"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\") " pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.311346 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c652h\" (UniqueName: \"kubernetes.io/projected/b9693139-63f6-471e-ae19-744460a6b114-kube-api-access-c652h\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.312431 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqt29\" (UniqueName: \"kubernetes.io/projected/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-kube-api-access-nqt29\") pod \"ovnkube-node-k6klf\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.321101 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h27mb\" (UniqueName: \"kubernetes.io/projected/fc62458c-133b-4909-91ab-b28870b78816-kube-api-access-h27mb\") pod \"ovnkube-control-plane-57b78d8988-r2hg2\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.321565 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.323967 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-87shs\" (UniqueName: \"kubernetes.io/projected/1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742-kube-api-access-87shs\") pod \"multus-j4b8g\" (UID: \"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\") " pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.326029 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pfxh\" (UniqueName: \"kubernetes.io/projected/0052f7cb-2eab-42e7-8f98-b1544811d9c3-kube-api-access-7pfxh\") pod \"machine-config-daemon-twnt9\" (UID: \"0052f7cb-2eab-42e7-8f98-b1544811d9c3\") " pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.327716 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkhpc\" (UniqueName: \"kubernetes.io/projected/688024c3-8b6c-450e-a7b2-3b3165438f4b-kube-api-access-bkhpc\") pod \"node-ca-7g24j\" (UID: \"688024c3-8b6c-450e-a7b2-3b3165438f4b\") " pod="openshift-image-registry/node-ca-7g24j" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.328176 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-96pbz\" (UniqueName: \"kubernetes.io/projected/86ab4495-4d65-4b3e-9a3d-bfaad21f506a-kube-api-access-96pbz\") pod \"node-resolver-fp8c5\" (UID: \"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\") " pod="openshift-dns/node-resolver-fp8c5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.334647 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.347513 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
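
Every "Failed to update status for pod" record above fails identically: the API server cannot deliver the pod.network-node-identity.openshift.io admission webhook call to https://127.0.0.1:9743/pod, so the kubelet's status patch is rejected with "connection refused". The webhook is served by the webhook container of network-node-identity-dgvkt (its command, with --webhook-host=127.0.0.1 and --webhook-port=9743, appears later in this log), and that pod is itself still in ContainerCreating here, so these failures form a self-resolving ordering loop after the restart rather than data loss: the status manager simply retries until the webhook comes up. A quick reachability check, sketched under the assumption of a root shell on the node:

  # Is anything listening on the webhook port yet?
  ss -tlnp | grep 9743
  # Probe the endpoint; "connection refused" here matches the errors above.
  curl -ks -o /dev/null -w '%{http_code}\n' https://127.0.0.1:9743/
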
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.348178 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.348234 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.348248 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.348268 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.348279 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
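
The NodeNotReady transition recorded here is the expected companion symptom: the container runtime reports NetworkReady=false because nothing has written a CNI config to /etc/kubernetes/cni/net.d/ yet, and on this cluster that config typically appears only once the ovnkube-node and multus pods (whose volume mounts are being set up above) are running. A sketch for watching recovery, assuming node shell access and a logged-in oc client (node name crc from the record):

  # The CNI config shows up here once the network plugin is ready.
  ls -l /etc/kubernetes/cni/net.d/
  # Ready flips back to True at the same point.
  oc get node crc -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
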
Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.358099 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qmvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9693139-63f6-471e-ae19-744460a6b114\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qmvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.377158 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc62458c-133b-4909-91ab-b28870b78816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-r2hg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.378226 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.386243 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ede7832-cf65-41a7-bd5d-aced161a948b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5f4b44aaf2cdcfc560006f179f3c73d2f8d9096fca618c7ca57c8230fd49c15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.394483 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.399586 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732d7dd3-9bf4-4e4c-9583-7cc2d66a273c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f96b5c895f4872869c8afd92cd4a2f5eb829c355a2e35dc83c6741c426f42ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://027e114c2682c800b54ad673ffaf9a3e6d2e4b1b44a3395f348dfc94c54ddc30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://638dff118a255984c06222c27a23f2f72f75f5f45043827e4866fd6e5ad9efa6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.399887 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:39 crc kubenswrapper[5118]: set -o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: source /etc/kubernetes/apiserver-url.env Dec 08 19:30:39 crc kubenswrapper[5118]: else Dec 08 19:30:39 crc kubenswrapper[5118]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 19:30:39 crc kubenswrapper[5118]: exit 1 Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 19:30:39 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,V
alue:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.401104 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct 
envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 19:30:39 crc kubenswrapper[5118]: W1208 19:30:39.405872 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-3942bc228f881c176c12062ec015c964f86409530b4e5a7909eaac7428b4e5db WatchSource:0}: Error finding container 3942bc228f881c176c12062ec015c964f86409530b4e5a7909eaac7428b4e5db: Status 404 returned error can't find the container with id 3942bc228f881c176c12062ec015c964f86409530b4e5a7909eaac7428b4e5db Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.408844 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: set -o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: source "/env/_master" Dec 08 19:30:39 crc kubenswrapper[5118]: set +o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 08 19:30:39 crc kubenswrapper[5118]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 19:30:39 crc kubenswrapper[5118]: ho_enable="--enable-hybrid-overlay" Dec 08 19:30:39 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 19:30:39 crc kubenswrapper[5118]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 19:30:39 crc kubenswrapper[5118]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 19:30:39 crc kubenswrapper[5118]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:39 crc kubenswrapper[5118]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 19:30:39 crc kubenswrapper[5118]: --webhook-host=127.0.0.1 \ Dec 08 19:30:39 crc kubenswrapper[5118]: --webhook-port=9743 \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${ho_enable} \ Dec 08 19:30:39 crc kubenswrapper[5118]: --enable-interconnect \ Dec 08 19:30:39 crc kubenswrapper[5118]: --disable-approver \ Dec 08 19:30:39 crc kubenswrapper[5118]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 19:30:39 crc kubenswrapper[5118]: --wait-for-kubernetes-api=200s \ Dec 08 19:30:39 crc kubenswrapper[5118]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 19:30:39 crc kubenswrapper[5118]: --loglevel="${LOGLEVEL}" Dec 08 19:30:39 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.409491 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.409625 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.412478 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: set -o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: source "/env/_master" Dec 08 19:30:39 crc kubenswrapper[5118]: set +o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 19:30:39 crc kubenswrapper[5118]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:39 crc kubenswrapper[5118]: --disable-webhook \ Dec 08 19:30:39 crc kubenswrapper[5118]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 19:30:39 crc kubenswrapper[5118]: --loglevel="${LOGLEVEL}" Dec 08 19:30:39 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.413651 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.421793 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.424846 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.426676 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" 
podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.430484 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-j4b8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87shs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-j4b8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.439156 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fp8c5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.443427 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.444459 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6klf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.450178 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.450228 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.450241 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.450265 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.450279 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.455175 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe5ad69b-e87f-4884-afa1-9f57df6393b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c34e38756564d5facd2424d608df2958fd9546536f3c41cac83e9bfd12c30913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad051eb042181f65b6862f8f0f09916b05c9fcd8e66d8642c1f86ae78267d1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\
\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4115e50a4084c607cd9530d3ab0e2b96fee8dbc9af125d400209816dc621f62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: W1208 19:30:39.458549 5118 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa21ead9_3381_422c_b52e_4a10a3ed1bd4.slice/crio-701217b61c4c52fcbb91a485e90cd76d7d3c93298437650d97affe73306b2d98 WatchSource:0}: Error finding container 701217b61c4c52fcbb91a485e90cd76d7d3c93298437650d97affe73306b2d98: Status 404 returned error can't find the container with id 701217b61c4c52fcbb91a485e90cd76d7d3c93298437650d97affe73306b2d98 Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.459028 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7g24j" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.460565 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:39 crc kubenswrapper[5118]: set -uo pipefail Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 19:30:39 crc kubenswrapper[5118]: HOSTS_FILE="/etc/hosts" Dec 08 19:30:39 crc kubenswrapper[5118]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: # Make a temporary file with the old hosts file's attributes. Dec 08 19:30:39 crc kubenswrapper[5118]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 19:30:39 crc kubenswrapper[5118]: echo "Failed to preserve hosts file. Exiting." Dec 08 19:30:39 crc kubenswrapper[5118]: exit 1 Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: while true; do Dec 08 19:30:39 crc kubenswrapper[5118]: declare -A svc_ips Dec 08 19:30:39 crc kubenswrapper[5118]: for svc in "${services[@]}"; do Dec 08 19:30:39 crc kubenswrapper[5118]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 19:30:39 crc kubenswrapper[5118]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 19:30:39 crc kubenswrapper[5118]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 19:30:39 crc kubenswrapper[5118]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 19:30:39 crc kubenswrapper[5118]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:39 crc kubenswrapper[5118]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:39 crc kubenswrapper[5118]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:39 crc kubenswrapper[5118]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 19:30:39 crc kubenswrapper[5118]: for i in ${!cmds[*]} Dec 08 19:30:39 crc kubenswrapper[5118]: do Dec 08 19:30:39 crc kubenswrapper[5118]: ips=($(eval "${cmds[i]}")) Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: svc_ips["${svc}"]="${ips[@]}" Dec 08 19:30:39 crc kubenswrapper[5118]: break Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: # Update /etc/hosts only if we get valid service IPs Dec 08 19:30:39 crc kubenswrapper[5118]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 19:30:39 crc kubenswrapper[5118]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 19:30:39 crc kubenswrapper[5118]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 19:30:39 crc kubenswrapper[5118]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 19:30:39 crc kubenswrapper[5118]: sleep 60 & wait Dec 08 19:30:39 crc kubenswrapper[5118]: continue Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: # Append resolver entries for services Dec 08 19:30:39 crc kubenswrapper[5118]: rc=0 Dec 08 19:30:39 crc kubenswrapper[5118]: for svc in "${!svc_ips[@]}"; do Dec 08 19:30:39 crc kubenswrapper[5118]: for ip in ${svc_ips[${svc}]}; do Dec 08 19:30:39 crc kubenswrapper[5118]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ $rc -ne 0 ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: sleep 60 & wait Dec 08 19:30:39 crc kubenswrapper[5118]: continue Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 19:30:39 crc kubenswrapper[5118]: # Replace /etc/hosts with our modified version if needed Dec 08 19:30:39 crc kubenswrapper[5118]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 19:30:39 crc kubenswrapper[5118]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: sleep 60 & wait Dec 08 19:30:39 crc kubenswrapper[5118]: unset svc_ips Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96pbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-fp8c5_openshift-dns(86ab4495-4d65-4b3e-9a3d-bfaad21f506a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.460815 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r955p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-xg8tn_openshift-multus(aa21ead9-3381-422c-b52e-4a10a3ed1bd4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.461770 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-fp8c5" podUID="86ab4495-4d65-4b3e-9a3d-bfaad21f506a" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.463975 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" podUID="aa21ead9-3381-422c-b52e-4a10a3ed1bd4" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.466314 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.468902 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2d8304-0772-47e0-8c2d-ed33f18c6dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:10Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 19:30:10.058099 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:10.058228 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:10.058973 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1331124983/tls.crt::/tmp/serving-cert-1331124983/tls.key\\\\\\\"\\\\nI1208 19:30:10.478364 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:10.481607 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:10.481639 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:10.481678 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:10.481710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:10.486171 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:10.486196 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:10.486240 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:10.486262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:10.486266 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:10.486269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:10.488953 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.469715 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fp8c5" event={"ID":"86ab4495-4d65-4b3e-9a3d-bfaad21f506a","Type":"ContainerStarted","Data":"598965c8d953632d0b2671ac7845416ece07ca6b7c352b5897300078782ef548"} Dec 08 
19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.471156 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"78cb0138412d7a156ab119c1b8a4f1034f0c7f08267e270ed39a8406e4868693"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.472579 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" event={"ID":"aa21ead9-3381-422c-b52e-4a10a3ed1bd4","Type":"ContainerStarted","Data":"701217b61c4c52fcbb91a485e90cd76d7d3c93298437650d97affe73306b2d98"} Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.473831 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:39 crc kubenswrapper[5118]: set -o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: source /etc/kubernetes/apiserver-url.env Dec 08 19:30:39 crc kubenswrapper[5118]: else Dec 08 19:30:39 crc kubenswrapper[5118]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 19:30:39 crc kubenswrapper[5118]: exit 1 Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 19:30:39 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0
685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.474134 5118 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:39 crc kubenswrapper[5118]: set -uo pipefail Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 19:30:39 crc kubenswrapper[5118]: HOSTS_FILE="/etc/hosts" Dec 08 19:30:39 crc kubenswrapper[5118]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: # Make a temporary file with the old hosts file's attributes. Dec 08 19:30:39 crc kubenswrapper[5118]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 19:30:39 crc kubenswrapper[5118]: echo "Failed to preserve hosts file. Exiting." Dec 08 19:30:39 crc kubenswrapper[5118]: exit 1 Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: while true; do Dec 08 19:30:39 crc kubenswrapper[5118]: declare -A svc_ips Dec 08 19:30:39 crc kubenswrapper[5118]: for svc in "${services[@]}"; do Dec 08 19:30:39 crc kubenswrapper[5118]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 19:30:39 crc kubenswrapper[5118]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 19:30:39 crc kubenswrapper[5118]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 19:30:39 crc kubenswrapper[5118]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 19:30:39 crc kubenswrapper[5118]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:39 crc kubenswrapper[5118]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:39 crc kubenswrapper[5118]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:39 crc kubenswrapper[5118]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 19:30:39 crc kubenswrapper[5118]: for i in ${!cmds[*]} Dec 08 19:30:39 crc kubenswrapper[5118]: do Dec 08 19:30:39 crc kubenswrapper[5118]: ips=($(eval "${cmds[i]}")) Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: svc_ips["${svc}"]="${ips[@]}" Dec 08 19:30:39 crc kubenswrapper[5118]: break Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: # Update /etc/hosts only if we get valid service IPs Dec 08 19:30:39 crc kubenswrapper[5118]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 19:30:39 crc kubenswrapper[5118]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 19:30:39 crc kubenswrapper[5118]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 19:30:39 crc kubenswrapper[5118]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 19:30:39 crc kubenswrapper[5118]: sleep 60 & wait Dec 08 19:30:39 crc kubenswrapper[5118]: continue Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: # Append resolver entries for services Dec 08 19:30:39 crc kubenswrapper[5118]: rc=0 Dec 08 19:30:39 crc kubenswrapper[5118]: for svc in "${!svc_ips[@]}"; do Dec 08 19:30:39 crc kubenswrapper[5118]: for ip in ${svc_ips[${svc}]}; do Dec 08 19:30:39 crc kubenswrapper[5118]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ $rc -ne 0 ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: sleep 60 & wait Dec 08 19:30:39 crc kubenswrapper[5118]: continue Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 19:30:39 crc kubenswrapper[5118]: # Replace /etc/hosts with our modified version if needed Dec 08 19:30:39 crc kubenswrapper[5118]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 19:30:39 crc kubenswrapper[5118]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: sleep 60 & wait Dec 08 19:30:39 crc kubenswrapper[5118]: unset svc_ips Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96pbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-fp8c5_openshift-dns(86ab4495-4d65-4b3e-9a3d-bfaad21f506a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.474379 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r955p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-xg8tn_openshift-multus(aa21ead9-3381-422c-b52e-4a10a3ed1bd4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.474561 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" 
event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"d00cf2f6f7c9a9e4c7803aac184e4072abcf8b8a3523a9c582f8fa72e7f494f0"} Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.474898 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.475257 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-fp8c5" podUID="86ab4495-4d65-4b3e-9a3d-bfaad21f506a" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.475450 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" podUID="aa21ead9-3381-422c-b52e-4a10a3ed1bd4" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.475883 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"3942bc228f881c176c12062ec015c964f86409530b4e5a7909eaac7428b4e5db"} Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.478227 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.478355 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: set -o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: source "/env/_master" Dec 08 19:30:39 crc kubenswrapper[5118]: set +o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 08 19:30:39 crc kubenswrapper[5118]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 19:30:39 crc kubenswrapper[5118]: ho_enable="--enable-hybrid-overlay" Dec 08 19:30:39 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 19:30:39 crc kubenswrapper[5118]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 19:30:39 crc kubenswrapper[5118]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 19:30:39 crc kubenswrapper[5118]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:39 crc kubenswrapper[5118]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 19:30:39 crc kubenswrapper[5118]: --webhook-host=127.0.0.1 \ Dec 08 19:30:39 crc kubenswrapper[5118]: --webhook-port=9743 \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${ho_enable} \ Dec 08 19:30:39 crc kubenswrapper[5118]: --enable-interconnect \ Dec 08 19:30:39 crc kubenswrapper[5118]: --disable-approver \ Dec 08 19:30:39 crc kubenswrapper[5118]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 19:30:39 crc kubenswrapper[5118]: --wait-for-kubernetes-api=200s \ Dec 08 19:30:39 crc kubenswrapper[5118]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 19:30:39 crc kubenswrapper[5118]: --loglevel="${LOGLEVEL}" Dec 08 19:30:39 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.479555 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.480771 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: set -o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: source "/env/_master" Dec 08 19:30:39 crc kubenswrapper[5118]: set +o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 19:30:39 crc kubenswrapper[5118]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:39 crc kubenswrapper[5118]: --disable-webhook \ Dec 08 19:30:39 crc kubenswrapper[5118]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 19:30:39 crc kubenswrapper[5118]: --loglevel="${LOGLEVEL}" Dec 08 19:30:39 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.481863 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.481942 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.482791 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-j4b8g" Dec 08 19:30:39 crc kubenswrapper[5118]: W1208 19:30:39.484847 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod688024c3_8b6c_450e_a7b2_3b3165438f4b.slice/crio-02bd04cef1adf191f84bffdf59c25758ff91c1aab688cd68e42ba99ea6c3b0ef WatchSource:0}: Error finding container 02bd04cef1adf191f84bffdf59c25758ff91c1aab688cd68e42ba99ea6c3b0ef: Status 404 returned error can't find the container with id 02bd04cef1adf191f84bffdf59c25758ff91c1aab688cd68e42ba99ea6c3b0ef Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.490759 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 19:30:39 crc kubenswrapper[5118]: while [ true ]; Dec 08 19:30:39 crc kubenswrapper[5118]: do Dec 08 19:30:39 crc kubenswrapper[5118]: for f in $(ls /tmp/serviceca); do Dec 08 19:30:39 crc kubenswrapper[5118]: echo $f Dec 08 19:30:39 crc kubenswrapper[5118]: ca_file_path="/tmp/serviceca/${f}" Dec 08 19:30:39 crc kubenswrapper[5118]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 19:30:39 crc kubenswrapper[5118]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 19:30:39 crc kubenswrapper[5118]: if [ -e "${reg_dir_path}" ]; then Dec 08 19:30:39 crc kubenswrapper[5118]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:39 crc kubenswrapper[5118]: else Dec 08 19:30:39 crc kubenswrapper[5118]: mkdir $reg_dir_path Dec 08 19:30:39 crc kubenswrapper[5118]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: for d in $(ls /etc/docker/certs.d); do Dec 08 19:30:39 crc kubenswrapper[5118]: echo $d Dec 08 19:30:39 crc kubenswrapper[5118]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 19:30:39 crc kubenswrapper[5118]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 19:30:39 crc kubenswrapper[5118]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 08 19:30:39 crc kubenswrapper[5118]: rm -rf /etc/docker/certs.d/$d Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: sleep 60 & wait ${!} Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkhpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-7g24j_openshift-image-registry(688024c3-8b6c-450e-a7b2-3b3165438f4b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.491392 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pfxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.492846 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-7g24j" podUID="688024c3-8b6c-450e-a7b2-3b3165438f4b" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.494927 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pfxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.496050 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" Dec 08 19:30:39 crc kubenswrapper[5118]: W1208 19:30:39.496866 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e8e2a90_2e42_4cbc_b4e2_f011f5dd7742.slice/crio-b3d37901caa422951e997ce0fa277af72f70a75309b4852b6943c26fb0734419 WatchSource:0}: Error finding container b3d37901caa422951e997ce0fa277af72f70a75309b4852b6943c26fb0734419: Status 404 returned error can't find the container with id b3d37901caa422951e997ce0fa277af72f70a75309b4852b6943c26fb0734419 Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.497070 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xg8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.499099 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 19:30:39 crc kubenswrapper[5118]: 
/entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 19:30:39 crc kubenswrapper[5118]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-ap
i-access-87shs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-j4b8g_openshift-multus(1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.500335 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-j4b8g" podUID="1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.504850 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.507744 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.514984 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qmvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9693139-63f6-471e-ae19-744460a6b114\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qmvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.516092 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:30:39 crc kubenswrapper[5118]: W1208 19:30:39.516552 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2b3e2b7_9ad6_416d_b00a_ac9bffbdd6a6.slice/crio-65978c9b871deab25ae63164fbd953cfd3bac8ab2f630085a500440e9fba4afa WatchSource:0}: Error finding container 65978c9b871deab25ae63164fbd953cfd3bac8ab2f630085a500440e9fba4afa: Status 404 returned error can't find the container with id 65978c9b871deab25ae63164fbd953cfd3bac8ab2f630085a500440e9fba4afa Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.523003 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc62458c-133b-4909-91ab-b28870b78816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-r2hg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.524559 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 19:30:39 crc kubenswrapper[5118]: apiVersion: v1 Dec 08 19:30:39 crc kubenswrapper[5118]: clusters: Dec 08 19:30:39 crc kubenswrapper[5118]: - cluster: Dec 08 19:30:39 crc kubenswrapper[5118]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 19:30:39 crc kubenswrapper[5118]: server: https://api-int.crc.testing:6443 Dec 08 19:30:39 crc kubenswrapper[5118]: name: default-cluster Dec 08 19:30:39 crc kubenswrapper[5118]: contexts: Dec 08 19:30:39 crc kubenswrapper[5118]: - context: Dec 08 19:30:39 crc kubenswrapper[5118]: cluster: default-cluster Dec 08 19:30:39 crc kubenswrapper[5118]: namespace: default Dec 08 19:30:39 crc kubenswrapper[5118]: user: default-auth Dec 08 19:30:39 crc kubenswrapper[5118]: name: default-context Dec 08 19:30:39 crc kubenswrapper[5118]: current-context: default-context Dec 08 19:30:39 crc kubenswrapper[5118]: kind: Config Dec 08 19:30:39 crc kubenswrapper[5118]: preferences: {} Dec 08 19:30:39 crc 
kubenswrapper[5118]: users: Dec 08 19:30:39 crc kubenswrapper[5118]: - name: default-auth Dec 08 19:30:39 crc kubenswrapper[5118]: user: Dec 08 19:30:39 crc kubenswrapper[5118]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:39 crc kubenswrapper[5118]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:39 crc kubenswrapper[5118]: EOF Dec 08 19:30:39 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqt29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-k6klf_openshift-ovn-kubernetes(e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.525893 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.529946 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ede7832-cf65-41a7-bd5d-aced161a948b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5f4b44aaf2cdcfc560006f179f3c73d2f8d9096fca618c7ca57c8230fd49c15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: W1208 19:30:39.530765 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc62458c_133b_4909_91ab_b28870b78816.slice/crio-8fa61aee39a0d2068ea74bfb6b90c57ef232abf87c714faa1fe72a465724906d WatchSource:0}: Error finding container 8fa61aee39a0d2068ea74bfb6b90c57ef232abf87c714faa1fe72a465724906d: Status 404 returned error can't find the container with id 8fa61aee39a0d2068ea74bfb6b90c57ef232abf87c714faa1fe72a465724906d Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.533702 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:39 crc kubenswrapper[5118]: set -euo pipefail Dec 08 19:30:39 crc kubenswrapper[5118]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 19:30:39 crc kubenswrapper[5118]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 19:30:39 crc kubenswrapper[5118]: # As the secret mount is optional we must wait for the files to be present. Dec 08 19:30:39 crc kubenswrapper[5118]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 08 19:30:39 crc kubenswrapper[5118]: TS=$(date +%s) Dec 08 19:30:39 crc kubenswrapper[5118]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 19:30:39 crc kubenswrapper[5118]: HAS_LOGGED_INFO=0 Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: log_missing_certs(){ Dec 08 19:30:39 crc kubenswrapper[5118]: CUR_TS=$(date +%s) Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 19:30:39 crc kubenswrapper[5118]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 19:30:39 crc kubenswrapper[5118]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 19:30:39 crc kubenswrapper[5118]: HAS_LOGGED_INFO=1 Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: } Dec 08 19:30:39 crc kubenswrapper[5118]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 08 19:30:39 crc kubenswrapper[5118]: log_missing_certs Dec 08 19:30:39 crc kubenswrapper[5118]: sleep 5 Dec 08 19:30:39 crc kubenswrapper[5118]: done Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 19:30:39 crc kubenswrapper[5118]: exec /usr/bin/kube-rbac-proxy \ Dec 08 19:30:39 crc kubenswrapper[5118]: --logtostderr \ Dec 08 19:30:39 crc kubenswrapper[5118]: --secure-listen-address=:9108 \ Dec 08 19:30:39 crc kubenswrapper[5118]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 19:30:39 crc kubenswrapper[5118]: --upstream=http://127.0.0.1:29108/ \ Dec 08 19:30:39 crc kubenswrapper[5118]: --tls-private-key-file=${TLS_PK} \ Dec 08 19:30:39 crc kubenswrapper[5118]: --tls-cert-file=${TLS_CERT} Dec 08 19:30:39 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h27mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-r2hg2_openshift-ovn-kubernetes(fc62458c-133b-4909-91ab-b28870b78816): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.537680 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:39 crc kubenswrapper[5118]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: set -o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: source "/env/_master" Dec 08 19:30:39 crc kubenswrapper[5118]: set +o allexport Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: ovn_v4_join_subnet_opt= Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: ovn_v6_join_subnet_opt= Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 
19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: ovn_v4_transit_switch_subnet_opt= Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: ovn_v6_transit_switch_subnet_opt= Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: dns_name_resolver_enabled_flag= Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: # This is needed so that converting clusters from GA to TP Dec 08 19:30:39 crc kubenswrapper[5118]: # will rollout control plane pods as well Dec 08 19:30:39 crc kubenswrapper[5118]: network_segmentation_enabled_flag= Dec 08 19:30:39 crc kubenswrapper[5118]: multi_network_enabled_flag= Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "true" == "true" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: multi_network_enabled_flag="--enable-multi-network" Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "true" == "true" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "true" != "true" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: multi_network_enabled_flag="--enable-multi-network" Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: route_advertisements_enable_flag= Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: preconfigured_udn_addresses_enable_flag= Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 19:30:39 crc kubenswrapper[5118]: multi_network_policy_enabled_flag= Dec 08 19:30:39 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 19:30:39 crc kubenswrapper[5118]: admin_network_policy_enabled_flag= Dec 08 19:30:39 crc 
kubenswrapper[5118]: if [[ "true" == "true" ]]; then Dec 08 19:30:39 crc kubenswrapper[5118]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: if [ "shared" == "shared" ]; then Dec 08 19:30:39 crc kubenswrapper[5118]: gateway_mode_flags="--gateway-mode shared" Dec 08 19:30:39 crc kubenswrapper[5118]: elif [ "shared" == "local" ]; then Dec 08 19:30:39 crc kubenswrapper[5118]: gateway_mode_flags="--gateway-mode local" Dec 08 19:30:39 crc kubenswrapper[5118]: else Dec 08 19:30:39 crc kubenswrapper[5118]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 08 19:30:39 crc kubenswrapper[5118]: exit 1 Dec 08 19:30:39 crc kubenswrapper[5118]: fi Dec 08 19:30:39 crc kubenswrapper[5118]: Dec 08 19:30:39 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 19:30:39 crc kubenswrapper[5118]: exec /usr/bin/ovnkube \ Dec 08 19:30:39 crc kubenswrapper[5118]: --enable-interconnect \ Dec 08 19:30:39 crc kubenswrapper[5118]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 19:30:39 crc kubenswrapper[5118]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 19:30:39 crc kubenswrapper[5118]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 19:30:39 crc kubenswrapper[5118]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 19:30:39 crc kubenswrapper[5118]: --metrics-enable-pprof \ Dec 08 19:30:39 crc kubenswrapper[5118]: --metrics-enable-config-duration \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${ovn_v4_join_subnet_opt} \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${ovn_v6_join_subnet_opt} \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${dns_name_resolver_enabled_flag} \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${persistent_ips_enabled_flag} \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${multi_network_enabled_flag} \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${network_segmentation_enabled_flag} \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${gateway_mode_flags} \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${route_advertisements_enable_flag} \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 19:30:39 crc kubenswrapper[5118]: --enable-egress-ip=true \ Dec 08 19:30:39 crc kubenswrapper[5118]: --enable-egress-firewall=true \ Dec 08 19:30:39 crc kubenswrapper[5118]: --enable-egress-qos=true \ Dec 08 19:30:39 crc kubenswrapper[5118]: --enable-egress-service=true \ Dec 08 19:30:39 crc kubenswrapper[5118]: --enable-multicast \ Dec 08 19:30:39 crc kubenswrapper[5118]: --enable-multi-external-gateway=true \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${multi_network_policy_enabled_flag} \ Dec 08 19:30:39 crc kubenswrapper[5118]: ${admin_network_policy_enabled_flag} Dec 08 19:30:39 crc kubenswrapper[5118]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h27mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-r2hg2_openshift-ovn-kubernetes(fc62458c-133b-4909-91ab-b28870b78816): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:39 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.538825 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" podUID="fc62458c-133b-4909-91ab-b28870b78816" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.539111 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732d7dd3-9bf4-4e4c-9583-7cc2d66a273c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f96b5c895f4872869c8afd92cd4a2f5eb829c355a2e35dc83c6741c426f42ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://027e114c2682c800b54ad673ffaf9a3e6d2e4b1b44a3395f348dfc94c54ddc30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://638dff118a255984c06222c27a23f2f72f75f5f45043827e4866fd6e5ad9efa6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.547786 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.552732 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.552792 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.552806 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.552826 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.552841 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.570152 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.611582 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-j4b8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87shs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-j4b8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.656103 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.656174 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.656192 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.656220 5118 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.656241 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.656391 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6klf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.692208 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe5ad69b-e87f-4884-afa1-9f57df6393b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c34e38756564d5facd2424d608df2958fd9546536f3c41cac83e9bfd12c30913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad051eb042181f65b6862f8f0f09916b05c9fcd8e66d8642c1f86ae78267d1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4115e50a4084c607cd9530d3ab0e2b96fee8dbc9af125d400209816dc621f62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.699769 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.699907 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:40.6998842 +0000 UTC m=+92.992729667 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.700044 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.700085 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.700126 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.700160 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700232 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700314 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:40.700294572 +0000 UTC m=+92.993140069 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700342 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700469 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:40.700441796 +0000 UTC m=+92.993287293 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700469 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700560 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700572 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700585 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700592 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700609 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700638 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:40.700624171 +0000 UTC m=+92.993469678 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.700662 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:40.700651231 +0000 UTC m=+92.993496728 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.733828 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2d8304-0772-47e0-8c2d-ed33f18c6dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:10Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 19:30:10.058099 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:10.058228 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:10.058973 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1331124983/tls.crt::/tmp/serving-cert-1331124983/tls.key\\\\\\\"\\\\nI1208 19:30:10.478364 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:10.481607 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:10.481639 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:10.481678 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:10.481710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:10.486171 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:10.486196 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:10.486240 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:10.486262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:10.486266 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:10.486269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:10.488953 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.758364 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.758443 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.758465 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.758492 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.758510 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.772619 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.801879 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.802233 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: E1208 19:30:39.802446 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs podName:b9693139-63f6-471e-ae19-744460a6b114 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:40.802422161 +0000 UTC m=+93.095267638 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs") pod "network-metrics-daemon-qmvkf" (UID: "b9693139-63f6-471e-ae19-744460a6b114") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.812669 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xg8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.848946 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fp8c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96pbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fp8c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.860726 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.860783 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.860797 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.860813 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.860824 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.887531 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7g24j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"688024c3-8b6c-450e-a7b2-3b3165438f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkhpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7g24j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.933446 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0052f7cb-2eab-42e7-8f98-b1544811d9c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-twnt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.963089 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.963137 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 
19:30:39.963149 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.963167 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.963179 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5118]: I1208 19:30:39.978639 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56a13789-0247-4d3a-9b22-6f0bc1a77b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://16184f6f1982588e4aacf024dca32892985c428914dfab58baf03a3e0a296cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://87524e06886d743364c19bf1d1cbd1e8c7e9be19424206ec6b49d02a770729ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b
6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9502f9a9fc4385a11375d1454dc563a79e935d00cf846d1cba59363a82cdebf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e23a47c43a333a7dbc87ffbd2d9968813080ef443b1706e946996bd22bd6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://c054019c6129f78ea8bc4f9abd8a9cb3f052c4b135ce01e75b822c97ba27de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\
":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361
840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.016210 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
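
The status-patch failures in this window all share a single root cause: pod status updates are admitted through the pod.network-node-identity.openshift.io webhook, which is served on 127.0.0.1:9743 by the network-node-identity pod on this same node. That pod is itself still down, so every patch fails with connection refused — a bootstrap ordering loop that clears once the webhook container starts. A minimal diagnostic sketch (assumes a root shell on the node, where ss, crictl and journalctl are available, as on RHCOS):

    # Is anything listening on the webhook port yet?
    ss -tlnp 'sport = :9743'

    # Ask CRI-O directly about the webhook's containers; API-side
    # status is unreliable while the patches themselves are failing.
    crictl ps -a | grep -E 'webhook|approver'

    # How many patches have failed since the kubelet restart?
    journalctl -u kubelet --since '19:29' | grep -c 'failed calling webhook'
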
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.052316 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.065351 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.065400 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.065412 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.065429 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.065442 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
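
The Ready=False condition here is mechanical: the kubelet's runtime status check finds no CNI config under /etc/kubernetes/cni/net.d/, so NetworkReady stays false and the node cannot report Ready. The config file is written by the network-plugin pods (multus / ovn-kubernetes) once they come up, which is exactly what the kubelet is waiting for. A sketch for watching the transition (the config filename is an assumption based on the multus default, e.g. 00-multus.conf):

    # Empty until the network plugin drops its config file in place.
    ls -l /etc/kubernetes/cni/net.d/

    # Follow the kubelet until NetworkReady flips to true.
    journalctl -u kubelet -f | grep --line-buffered NetworkReady
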
Has your network provider started?"} Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.100636 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.101493 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.103243 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.104631 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.106111 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.108092 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.109012 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.110463 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.111157 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.112574 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.113470 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.115085 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.116257 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.117346 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 08 19:30:40 
crc kubenswrapper[5118]: I1208 19:30:40.117806 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.118836 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.119556 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.121355 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.122304 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.124289 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.125661 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.129662 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.130745 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.131636 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.133023 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.133912 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.137629 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.138350 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.140802 5118 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.141311 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.142333 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.145234 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.149038 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.151372 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.152922 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.153875 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.155457 5118 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.155581 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.158138 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.159764 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.160858 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.162087 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.162574 5118 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.163922 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.164778 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.165267 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.166417 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.167365 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.167618 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.167646 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.167654 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.167667 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.167678 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
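
Interleaved with the NotReady heartbeats, the kubelet's orphan-volume housekeeping is deleting /var/lib/kubelet/pods/<uid>/volumes for dozens of pod UIDs. These pods were removed from the API while the kubelet was down, so at startup they exist on disk but not in the pod manager, and the cleanup pass reclaims their directories (including one leftover volume subpath from the previous ovnkube-controller). A sketch for cross-checking a leftover UID against the runtime (assumes CRI-O applies the upstream io.kubernetes.pod.uid sandbox label):

    # Pod dirs still present under the kubelet root.
    ls /var/lib/kubelet/pods/ | wc -l

    # No output means no live sandbox exists for this UID, i.e. the
    # directory is an orphan and eligible for cleanup.
    uid=01080b46-74f1-4191-8755-5152a57b3b25   # first UID cleaned above
    crictl pods -q --label "io.kubernetes.pod.uid=${uid}"
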
Has your network provider started?"} Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.168627 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.169339 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.170417 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.171154 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.172314 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.173196 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.174792 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.175816 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.176599 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.177344 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.270946 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.270994 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.271005 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.271019 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.271028 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.373584 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.373654 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.373674 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.373716 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.373730 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.476501 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.476557 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.476575 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.476598 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.476627 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
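
Note the cadence of these blocks: the NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady quartet plus the Ready=False condition repeats at roughly 100 ms intervals (19:30:40.065, .167, .270, .373, .476). Each node-status sync re-records the same events until NetworkReady flips, so the volume of these lines reflects one underlying CNI condition, not many distinct failures. A rough way to quantify the repetition:

    journalctl -u kubelet --since '19:30:39' --until '19:30:41' \
      | grep -c 'Node became not ready'
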
Has your network provider started?"} Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.479763 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" event={"ID":"fc62458c-133b-4909-91ab-b28870b78816","Type":"ContainerStarted","Data":"8fa61aee39a0d2068ea74bfb6b90c57ef232abf87c714faa1fe72a465724906d"} Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.481387 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7g24j" event={"ID":"688024c3-8b6c-450e-a7b2-3b3165438f4b","Type":"ContainerStarted","Data":"02bd04cef1adf191f84bffdf59c25758ff91c1aab688cd68e42ba99ea6c3b0ef"} Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.483364 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:40 crc kubenswrapper[5118]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:40 crc kubenswrapper[5118]: set -euo pipefail Dec 08 19:30:40 crc kubenswrapper[5118]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 19:30:40 crc kubenswrapper[5118]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 19:30:40 crc kubenswrapper[5118]: # As the secret mount is optional we must wait for the files to be present. Dec 08 19:30:40 crc kubenswrapper[5118]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 08 19:30:40 crc kubenswrapper[5118]: TS=$(date +%s) Dec 08 19:30:40 crc kubenswrapper[5118]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 19:30:40 crc kubenswrapper[5118]: HAS_LOGGED_INFO=0 Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: log_missing_certs(){ Dec 08 19:30:40 crc kubenswrapper[5118]: CUR_TS=$(date +%s) Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 19:30:40 crc kubenswrapper[5118]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 19:30:40 crc kubenswrapper[5118]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 19:30:40 crc kubenswrapper[5118]: HAS_LOGGED_INFO=1 Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: } Dec 08 19:30:40 crc kubenswrapper[5118]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 08 19:30:40 crc kubenswrapper[5118]: log_missing_certs Dec 08 19:30:40 crc kubenswrapper[5118]: sleep 5 Dec 08 19:30:40 crc kubenswrapper[5118]: done Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 19:30:40 crc kubenswrapper[5118]: exec /usr/bin/kube-rbac-proxy \ Dec 08 19:30:40 crc kubenswrapper[5118]: --logtostderr \ Dec 08 19:30:40 crc kubenswrapper[5118]: --secure-listen-address=:9108 \ Dec 08 19:30:40 crc kubenswrapper[5118]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 19:30:40 crc kubenswrapper[5118]: --upstream=http://127.0.0.1:29108/ \ Dec 08 19:30:40 crc kubenswrapper[5118]: --tls-private-key-file=${TLS_PK} \ Dec 08 19:30:40 crc kubenswrapper[5118]: --tls-cert-file=${TLS_CERT} Dec 08 19:30:40 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h27mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-r2hg2_openshift-ovn-kubernetes(fc62458c-133b-4909-91ab-b28870b78816): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:40 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.486829 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerStarted","Data":"65978c9b871deab25ae63164fbd953cfd3bac8ab2f630085a500440e9fba4afa"} Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.487098 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:40 crc kubenswrapper[5118]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 19:30:40 crc kubenswrapper[5118]: while [ true ]; Dec 08 19:30:40 crc kubenswrapper[5118]: do Dec 08 19:30:40 crc kubenswrapper[5118]: for f in $(ls /tmp/serviceca); do Dec 08 19:30:40 crc kubenswrapper[5118]: echo $f Dec 08 19:30:40 crc kubenswrapper[5118]: ca_file_path="/tmp/serviceca/${f}" Dec 08 19:30:40 crc kubenswrapper[5118]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 19:30:40 crc kubenswrapper[5118]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 19:30:40 crc kubenswrapper[5118]: if [ -e 
"${reg_dir_path}" ]; then Dec 08 19:30:40 crc kubenswrapper[5118]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:40 crc kubenswrapper[5118]: else Dec 08 19:30:40 crc kubenswrapper[5118]: mkdir $reg_dir_path Dec 08 19:30:40 crc kubenswrapper[5118]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: done Dec 08 19:30:40 crc kubenswrapper[5118]: for d in $(ls /etc/docker/certs.d); do Dec 08 19:30:40 crc kubenswrapper[5118]: echo $d Dec 08 19:30:40 crc kubenswrapper[5118]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 19:30:40 crc kubenswrapper[5118]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 19:30:40 crc kubenswrapper[5118]: if [ ! -e "${reg_conf_path}" ]; then Dec 08 19:30:40 crc kubenswrapper[5118]: rm -rf /etc/docker/certs.d/$d Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: done Dec 08 19:30:40 crc kubenswrapper[5118]: sleep 60 & wait ${!} Dec 08 19:30:40 crc kubenswrapper[5118]: done Dec 08 19:30:40 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkhpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-7g24j_openshift-image-registry(688024c3-8b6c-450e-a7b2-3b3165438f4b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:40 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.488477 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:40 crc kubenswrapper[5118]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: set -o allexport Dec 08 19:30:40 crc kubenswrapper[5118]: source "/env/_master" Dec 08 19:30:40 crc kubenswrapper[5118]: set +o allexport Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: ovn_v4_join_subnet_opt= Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: 
ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: ovn_v6_join_subnet_opt= Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: ovn_v4_transit_switch_subnet_opt= Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: ovn_v6_transit_switch_subnet_opt= Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: dns_name_resolver_enabled_flag= Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: # This is needed so that converting clusters from GA to TP Dec 08 19:30:40 crc kubenswrapper[5118]: # will rollout control plane pods as well Dec 08 19:30:40 crc kubenswrapper[5118]: network_segmentation_enabled_flag= Dec 08 19:30:40 crc kubenswrapper[5118]: multi_network_enabled_flag= Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "true" == "true" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: multi_network_enabled_flag="--enable-multi-network" Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "true" == "true" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "true" != "true" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: multi_network_enabled_flag="--enable-multi-network" Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: route_advertisements_enable_flag= Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: preconfigured_udn_addresses_enable_flag= Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 19:30:40 crc kubenswrapper[5118]: multi_network_policy_enabled_flag= Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: 
multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 19:30:40 crc kubenswrapper[5118]: admin_network_policy_enabled_flag= Dec 08 19:30:40 crc kubenswrapper[5118]: if [[ "true" == "true" ]]; then Dec 08 19:30:40 crc kubenswrapper[5118]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: if [ "shared" == "shared" ]; then Dec 08 19:30:40 crc kubenswrapper[5118]: gateway_mode_flags="--gateway-mode shared" Dec 08 19:30:40 crc kubenswrapper[5118]: elif [ "shared" == "local" ]; then Dec 08 19:30:40 crc kubenswrapper[5118]: gateway_mode_flags="--gateway-mode local" Dec 08 19:30:40 crc kubenswrapper[5118]: else Dec 08 19:30:40 crc kubenswrapper[5118]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 08 19:30:40 crc kubenswrapper[5118]: exit 1 Dec 08 19:30:40 crc kubenswrapper[5118]: fi Dec 08 19:30:40 crc kubenswrapper[5118]: Dec 08 19:30:40 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 19:30:40 crc kubenswrapper[5118]: exec /usr/bin/ovnkube \ Dec 08 19:30:40 crc kubenswrapper[5118]: --enable-interconnect \ Dec 08 19:30:40 crc kubenswrapper[5118]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 19:30:40 crc kubenswrapper[5118]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 19:30:40 crc kubenswrapper[5118]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 19:30:40 crc kubenswrapper[5118]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 19:30:40 crc kubenswrapper[5118]: --metrics-enable-pprof \ Dec 08 19:30:40 crc kubenswrapper[5118]: --metrics-enable-config-duration \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${ovn_v4_join_subnet_opt} \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${ovn_v6_join_subnet_opt} \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${dns_name_resolver_enabled_flag} \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${persistent_ips_enabled_flag} \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${multi_network_enabled_flag} \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${network_segmentation_enabled_flag} \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${gateway_mode_flags} \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${route_advertisements_enable_flag} \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 19:30:40 crc kubenswrapper[5118]: --enable-egress-ip=true \ Dec 08 19:30:40 crc kubenswrapper[5118]: --enable-egress-firewall=true \ Dec 08 19:30:40 crc kubenswrapper[5118]: --enable-egress-qos=true \ Dec 08 19:30:40 crc kubenswrapper[5118]: --enable-egress-service=true \ Dec 08 19:30:40 crc kubenswrapper[5118]: --enable-multicast \ Dec 08 19:30:40 crc kubenswrapper[5118]: --enable-multi-external-gateway=true \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${multi_network_policy_enabled_flag} \ Dec 08 19:30:40 crc kubenswrapper[5118]: ${admin_network_policy_enabled_flag} Dec 08 19:30:40 crc kubenswrapper[5118]: 
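
The ovnkube-cluster-manager command above is full of branches that can never vary at run time — if [[ "" != "" ]], if [ "shared" == "shared" ] — because the network operator renders literal values into the script when it generates the manifest. An empty string means the corresponding option (join subnets, transit-switch subnets, DNS name resolver, route advertisements, preconfigured UDN addresses) is disabled for this cluster, and the dead branches are residue of that render step. A minimal sketch of the same pattern, with a hypothetical render-time value:

    #!/bin/bash
    # At render time the operator substitutes a literal for the
    # placeholder, so a disabled feature compiles down to the
    # always-false test seen in the log.
    v4_join_subnet=""                          # hypothetical rendered value
    ovn_v4_join_subnet_opt=""
    if [[ "${v4_join_subnet}" != "" ]]; then
      ovn_v4_join_subnet_opt="--gateway-v4-join-subnet ${v4_join_subnet}"
    fi
    echo "extra flags: ${ovn_v4_join_subnet_opt:-<none>}"
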
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h27mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-r2hg2_openshift-ovn-kubernetes(fc62458c-133b-4909-91ab-b28870b78816): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:40 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.489451 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-7g24j" podUID="688024c3-8b6c-450e-a7b2-3b3165438f4b" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.489674 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-j4b8g" event={"ID":"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742","Type":"ContainerStarted","Data":"b3d37901caa422951e997ce0fa277af72f70a75309b4852b6943c26fb0734419"} Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.490488 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" podUID="fc62458c-133b-4909-91ab-b28870b78816" Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.490757 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:40 crc kubenswrapper[5118]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 19:30:40 crc 
kubenswrapper[5118]: apiVersion: v1 Dec 08 19:30:40 crc kubenswrapper[5118]: clusters: Dec 08 19:30:40 crc kubenswrapper[5118]: - cluster: Dec 08 19:30:40 crc kubenswrapper[5118]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 19:30:40 crc kubenswrapper[5118]: server: https://api-int.crc.testing:6443 Dec 08 19:30:40 crc kubenswrapper[5118]: name: default-cluster Dec 08 19:30:40 crc kubenswrapper[5118]: contexts: Dec 08 19:30:40 crc kubenswrapper[5118]: - context: Dec 08 19:30:40 crc kubenswrapper[5118]: cluster: default-cluster Dec 08 19:30:40 crc kubenswrapper[5118]: namespace: default Dec 08 19:30:40 crc kubenswrapper[5118]: user: default-auth Dec 08 19:30:40 crc kubenswrapper[5118]: name: default-context Dec 08 19:30:40 crc kubenswrapper[5118]: current-context: default-context Dec 08 19:30:40 crc kubenswrapper[5118]: kind: Config Dec 08 19:30:40 crc kubenswrapper[5118]: preferences: {} Dec 08 19:30:40 crc kubenswrapper[5118]: users: Dec 08 19:30:40 crc kubenswrapper[5118]: - name: default-auth Dec 08 19:30:40 crc kubenswrapper[5118]: user: Dec 08 19:30:40 crc kubenswrapper[5118]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:40 crc kubenswrapper[5118]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:40 crc kubenswrapper[5118]: EOF Dec 08 19:30:40 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqt29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-k6klf_openshift-ovn-kubernetes(e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:40 crc kubenswrapper[5118]: > logger="UnhandledError" Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.491797 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:40 crc kubenswrapper[5118]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 19:30:40 crc kubenswrapper[5118]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 19:30:40 crc kubenswrapper[5118]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87shs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-j4b8g_openshift-multus(1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 19:30:40 crc kubenswrapper[5118]: > logger="UnhandledError"
Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.491839 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"
Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.492001 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerStarted","Data":"9edf6a0b9508f28dec2e087a795220780e3de9f63ef1ec557fabb03f0661b881"}
Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.493118 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-j4b8g" podUID="1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742"
Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.494137 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pfxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.497054 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pfxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.498358 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" 
with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.499524 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.516025 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xg8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.527194 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fp8c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96pbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fp8c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.535649 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7g24j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"688024c3-8b6c-450e-a7b2-3b3165438f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkhpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7g24j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.544444 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0052f7cb-2eab-42e7-8f98-b1544811d9c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-twnt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.565012 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56a13789-0247-4d3a-9b22-6f0bc1a77b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://16184f6f1982588e4aacf024dca32892985c428914dfab58baf03a3e0a296cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://87524e06886d743364c19bf1d1cbd1e8c7e9be19424206ec6b49d02a770729ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9502f9a9fc4385a11375d1454dc563a79e935d00cf846d1cba59363a82cdebf4\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e23a47c43a333a7dbc87ffbd2d9968813080ef443b1706e946996bd22bd6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://c054019c6129f78ea8bc4f9abd8a9cb3f052c4b135ce01e75b822c97ba27de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57
b51c75b7dc9f56776ec109ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.574935 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.578321 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.578452 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.578528 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.578592 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.578663 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.591015 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.601890 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.610656 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qmvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9693139-63f6-471e-ae19-744460a6b114\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qmvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.621376 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc62458c-133b-4909-91ab-b28870b78816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-r2hg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.633750 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ede7832-cf65-41a7-bd5d-aced161a948b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5f4b44aaf2cdcfc560006f179f3c73d2f8d9096fca618c7ca57c8230fd49c15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.643012 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732d7dd3-9bf4-4e4c-9583-7cc2d66a273c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f96b5c895f4872869c8afd92cd4a2f5eb829c355a2e35dc83c6741c426f42ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[
0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://027e114c2682c800b54ad673ffaf9a3e6d2e4b1b44a3395f348dfc94c54ddc30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://638dff118a255984c06222c27a23f2f72f75f5f45043827e4866fd6e5ad9efa6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 
19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.650541 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.658535 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.681081 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.681134 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.681148 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.681167 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.681179 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.690205 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-j4b8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87shs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-j4b8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.715510 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.715631 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.715716 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:42.715651034 +0000 UTC m=+95.008496531 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.715799 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.715824 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.715837 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.715850 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.715894 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" 
failed. No retries permitted until 2025-12-08 19:30:42.71587557 +0000 UTC m=+95.008721047 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.715920 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.715947 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.715969 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.716005 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:42.715991353 +0000 UTC m=+95.008836850 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.716056 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.716063 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.716082 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.716091 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.716121 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:42.716106316 +0000 UTC m=+95.008951853 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.716139 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:42.716131777 +0000 UTC m=+95.008977334 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.733060 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6klf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.769731 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe5ad69b-e87f-4884-afa1-9f57df6393b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c34e38756564d5facd2424d608df2958fd9546536f3c41cac83e9bfd12c30913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad051eb042181f65b6862f8f0f09916b05c9fcd8e66d8642c1f86ae78267d1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4115e50a4084c607cd9530d3ab0e2b96fee8dbc9af125d400209816dc621f62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.783746 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.784017 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.784088 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.784149 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.784203 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.817296 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.817476 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:40 crc kubenswrapper[5118]: E1208 19:30:40.817532 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs podName:b9693139-63f6-471e-ae19-744460a6b114 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:42.817515666 +0000 UTC m=+95.110361123 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs") pod "network-metrics-daemon-qmvkf" (UID: "b9693139-63f6-471e-ae19-744460a6b114") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.817389 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2d8304-0772-47e0-8c2d-ed33f18c6dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:10Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 19:30:10.058099 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:10.058228 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:10.058973 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1331124983/tls.crt::/tmp/serving-cert-1331124983/tls.key\\\\\\\"\\\\nI1208 19:30:10.478364 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:10.481607 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:10.481639 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:10.481678 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:10.481710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:10.486171 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:10.486196 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:10.486240 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:10.486262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:10.486266 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:10.486269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:10.488953 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.849252 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.886967 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.887036 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.887053 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.887073 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.887087 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.897170 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-j4b8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87shs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-j4b8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.943866 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6klf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc 
kubenswrapper[5118]: I1208 19:30:40.974791 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe5ad69b-e87f-4884-afa1-9f57df6393b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c34e38756564d5facd2424d608df2958fd9546536f3c41cac83e9bfd12c30913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad051eb042181f65b6862f8f0f09916b05c9fcd8e66d8642c1f86ae78267d1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4115e50a4084c607cd9530d3ab0e2b96fee8dbc9af125d400209816dc621f62d\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.988988 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.989027 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.989035 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.989049 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:40 crc kubenswrapper[5118]: I1208 19:30:40.989059 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.011939 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2d8304-0772-47e0-8c2d-ed33f18c6dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:10Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 19:30:10.058099 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:10.058228 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:10.058973 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1331124983/tls.crt::/tmp/serving-cert-1331124983/tls.key\\\\\\\"\\\\nI1208 19:30:10.478364 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:10.481607 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:10.481639 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:10.481678 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:10.481710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:10.486171 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:10.486196 1 genericapiserver.go:546] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nW1208 19:30:10.486240 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:10.486262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:10.486266 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:10.486269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:10.488953 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a2330
38d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.050261 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.091649 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.092065 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.092290 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.092516 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.092667 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.096088 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.096151 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.096141 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xg8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.096342 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:41 crc kubenswrapper[5118]: E1208 19:30:41.096342 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:41 crc kubenswrapper[5118]: E1208 19:30:41.096486 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:41 crc kubenswrapper[5118]: E1208 19:30:41.096592 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.096648 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:41 crc kubenswrapper[5118]: E1208 19:30:41.096754 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.130147 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fp8c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96pbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fp8c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.170044 5118 
status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7g24j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"688024c3-8b6c-450e-a7b2-3b3165438f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkhpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7g24j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.195634 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.195743 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.195764 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.195790 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.195809 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.211943 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0052f7cb-2eab-42e7-8f98-b1544811d9c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-twnt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.268278 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56a13789-0247-4d3a-9b22-6f0bc1a77b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://16184f6f1982588e4aacf024dca32892985c428914dfab58baf03a3e0a296cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://87524e06886d743364c19bf1d1cbd1e8c7e9be19424206ec6b49d02a770729ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9502f9a9fc4385a11375d1454dc563a79e935d00cf846d1cba59363a82cdebf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e23a47c43a333a7dbc87ffbd2d9968813080ef443b1706e946996bd22bd6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://c054019c6129f78ea8bc4f9abd8a9cb3f052c4b135ce01e75b822c97ba27de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
dctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.292908 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.298711 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.298773 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.298788 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.298819 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.298834 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.332725 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.374579 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.401862 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.401929 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.401942 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.401963 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.401979 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.410616 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qmvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9693139-63f6-471e-ae19-744460a6b114\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qmvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.452470 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc62458c-133b-4909-91ab-b28870b78816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-r2hg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.491881 5118 
status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ede7832-cf65-41a7-bd5d-aced161a948b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5f4b44aaf2cdcfc560006f179f3c73d2f8d9096fca618c7ca57c8230fd49c15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.503814 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.503891 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.503917 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.503942 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.503957 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.535224 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732d7dd3-9bf4-4e4c-9583-7cc2d66a273c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f96b5c895f4872869c8afd92cd4a2f5eb829c355a2e35dc83c6741c426f42ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://027e114c2682c800b54ad673ffaf9a3e6d2e4b1b44a3395f348dfc94c54ddc30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://638dff118a255984c06222c27a23f2f72f75f5f45043827e4866fd6e5ad9efa6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.574006 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.606905 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.606968 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.606983 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.607004 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.607016 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.709362 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.709416 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.709427 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.709447 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.709460 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.813338 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.813417 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.813434 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.813459 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.813477 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.916008 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.916088 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.916108 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.916135 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:41 crc kubenswrapper[5118]: I1208 19:30:41.916157 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.019007 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.019076 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.019095 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.019122 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.019142 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.121917 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.121998 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.122025 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.122053 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.122073 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.225752 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.225825 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.225840 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.225862 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.225877 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.328194 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.328277 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.328295 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.328326 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.328346 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.432202 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.432308 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.432332 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.432362 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.432395 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.535260 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.535393 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.535434 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.535466 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.535493 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.638356 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.638409 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.638428 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.638452 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.638473 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.738585 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.738938 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:46.738885025 +0000 UTC m=+99.031730482 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.739103 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.739177 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.739274 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739307 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.739358 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739429 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:46.739391028 +0000 UTC m=+99.032236525 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739588 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739596 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739763 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:46.739677157 +0000 UTC m=+99.032522774 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739768 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739787 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739803 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739810 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739832 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739915 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:46.739884922 +0000 UTC m=+99.032730429 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.739960 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:46.739938584 +0000 UTC m=+99.032784081 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.741201 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.741242 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.741253 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.741275 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.741290 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.840207 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf"
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.840996 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 19:30:42 crc kubenswrapper[5118]: E1208 19:30:42.841250 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs podName:b9693139-63f6-471e-ae19-744460a6b114 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:46.841212919 +0000 UTC m=+99.134058406 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs") pod "network-metrics-daemon-qmvkf" (UID: "b9693139-63f6-471e-ae19-744460a6b114") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.843337 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.843402 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.843426 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.843455 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.843480 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.946230 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.946299 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.946325 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.946354 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:42 crc kubenswrapper[5118]: I1208 19:30:42.946375 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.048970 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.049046 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.049070 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.049099 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.049122 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.095942 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.096130 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:30:43 crc kubenswrapper[5118]: E1208 19:30:43.096133 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.096021 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.096300 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:43 crc kubenswrapper[5118]: E1208 19:30:43.096260 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 19:30:43 crc kubenswrapper[5118]: E1208 19:30:43.096393 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114"
Dec 08 19:30:43 crc kubenswrapper[5118]: E1208 19:30:43.096531 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.151952 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.151998 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.152007 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.152021 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.152031 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.254888 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.254951 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.254968 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.254994 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.255011 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.357743 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.357811 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.357828 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.357850 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.357866 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.460527 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.460576 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.460590 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.460609 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.460625 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.563246 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.563298 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.563309 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.563324 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.563334 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.666390 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.666469 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.666492 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.666525 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.666549 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.768936 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.769006 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.769030 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.769058 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.769082 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.872449 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.872521 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.872541 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.872571 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.872616 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.887529 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.887596 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.887614 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.887639 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.887656 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:43 crc kubenswrapper[5118]: E1208 19:30:43.905317 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.909994 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.910068 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.910088 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.910116 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.910135 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5118]: E1208 19:30:43.924180 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.928532 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.928609 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.928638 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.928671 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.928803 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5118]: E1208 19:30:43.946639 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.950924 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.951009 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.951030 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.951097 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.951118 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5118]: E1208 19:30:43.967429 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.972179 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.972268 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.972282 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.972300 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.972312 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5118]: E1208 19:30:43.986945 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80ade9b2-160d-493f-aadd-1db6165f9646\\\",\\\"systemUUID\\\":\\\"38ff36e9-ea31-4d0f-b411-1d90f601ae3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5118]: E1208 19:30:43.987130 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.989122 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.989160 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.989173 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.989191 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5118]: I1208 19:30:43.989208 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.093151 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.093217 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.093238 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.093264 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.093288 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.195982 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.196047 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.196081 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.196108 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.196132 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.298906 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.298995 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.299015 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.299041 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.299059 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.401613 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.401706 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.401723 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.401746 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.401761 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.503984 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.504277 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.504344 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.504416 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.504487 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.606894 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.606936 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.606947 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.606963 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.606973 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.709546 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.709725 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.709747 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.709770 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.709792 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.813265 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.813351 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.813375 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.813409 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.813444 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.915943 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.916002 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.916017 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.916038 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5118]: I1208 19:30:44.916051 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.018068 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.018144 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.018173 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.018209 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.018233 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.095491 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.095595 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.095673 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.095648 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:45 crc kubenswrapper[5118]: E1208 19:30:45.095858 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:45 crc kubenswrapper[5118]: E1208 19:30:45.096107 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:30:45 crc kubenswrapper[5118]: E1208 19:30:45.096234 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:45 crc kubenswrapper[5118]: E1208 19:30:45.096367 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.121402 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.121477 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.121491 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.121511 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.121525 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.224383 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.224441 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.224454 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.224473 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.224488 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.327546 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.327613 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.327632 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.327658 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.327676 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.430383 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.430444 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.430460 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.430484 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.430499 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.533441 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.533514 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.533533 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.533560 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.533579 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.636674 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.636750 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.636759 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.636775 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.636787 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.739977 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.740032 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.740048 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.740067 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.740084 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.841994 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.842041 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.842053 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.842070 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.842080 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.944684 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.944743 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.944754 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.944770 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5118]: I1208 19:30:45.944779 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.047745 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.047788 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.047797 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.047811 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.047820 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.150283 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.150327 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.150361 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.150379 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.150389 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.253512 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.253568 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.253593 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.253625 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.253648 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.355934 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.356006 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.356024 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.356052 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.356074 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.458718 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.458772 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.458786 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.458804 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.458817 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.561042 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.561361 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.561381 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.561404 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.561422 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.664055 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.664113 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.664131 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.664153 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.664170 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.767015 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.767077 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.767095 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.767120 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.767138 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.785056 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785222 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.78518655 +0000 UTC m=+107.078032037 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.785339 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.785430 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.785466 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785568 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785583 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785614 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785630 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785646 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.785627002 +0000 UTC m=+107.078472499 (durationBeforeRetry 8s). 
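The kube-api-access-gwt8b failure above is a projected volume: it mounts only once every one of its sources resolves, and the "object ... not registered" errors mean the kubelet's local object cache has not yet synced kube-root-ca.crt and openshift-service-ca.crt from the API server (the per-volume error detail continues in the records below). A rough sketch of that composition, using hand-written structs that mirror only the relevant corev1 JSON fields — the struct subset, the 3607-second token lifetime, and the omission of the downward-API namespace source are simplifications/assumptions:

```go
// projected_volume.go — hand-rolled subset of the corev1 projected-volume
// schema, enough to show which API objects a kube-api-access-* mount needs.
package main

import (
	"encoding/json"
	"os"
)

type TokenSource struct {
	Path              string `json:"path"`
	ExpirationSeconds int64  `json:"expirationSeconds"` // assumed default lifetime
}

type ConfigMapSource struct {
	Name string `json:"name"`
}

type Source struct {
	ServiceAccountToken *TokenSource     `json:"serviceAccountToken,omitempty"`
	ConfigMap           *ConfigMapSource `json:"configMap,omitempty"`
}

func main() {
	// All sources must resolve before MountVolume.SetUp succeeds; while the
	// kubelet's cache is cold, each ConfigMap lookup returns the
	// "object <ns>/<name> not registered" error seen above.
	sources := []Source{
		{ServiceAccountToken: &TokenSource{Path: "token", ExpirationSeconds: 3607}},
		{ConfigMap: &ConfigMapSource{Name: "kube-root-ca.crt"}},
		{ConfigMap: &ConfigMapSource{Name: "openshift-service-ca.crt"}},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(sources)
}
```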
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785732 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.785709094 +0000 UTC m=+107.078554561 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.785588 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785792 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785806 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785829 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785854 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785926 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.785897519 +0000 UTC m=+107.078743126 (durationBeforeRetry 8s). 
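Every "NetworkPluginNotReady ... no CNI configuration file in /etc/kubernetes/cni/net.d/" record in this log reduces to one filesystem fact: the network plugin (OVN-Kubernetes on this node) has not yet written a CNI config into that directory, so the runtime reports NetworkReady=false and pod sandbox creation is skipped. A simplified stand-in for that check — libcni's real lookup also validates file contents, whereas this sketch only looks for candidate extensions:

```go
// cni_check.go — rough diagnostic approximating the CNI config lookup
// behind the repeated NetworkReady=false condition above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	var found []string
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// The state the kubelet keeps reporting: the network provider
		// has not started (or has not finished writing its config).
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Println("CNI configs present:", found)
}
```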
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.785966 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.7859488 +0000 UTC m=+107.078794497 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.870640 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.870772 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.870803 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.870834 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.870855 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.886624 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.886902 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:46 crc kubenswrapper[5118]: E1208 19:30:46.887035 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs podName:b9693139-63f6-471e-ae19-744460a6b114 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.886998321 +0000 UTC m=+107.179843918 (durationBeforeRetry 8s). 
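The setters.go records carry the full Ready condition as JSON. Parsing one of them with a hand-rolled struct that mirrors v1.NodeCondition's JSON field names shows the shape directly — the struct is defined locally for illustration, and the payload below is copied verbatim from a record above:

```go
// condition.go — decode the condition object logged by setters.go.
package main

import (
	"encoding/json"
	"fmt"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("node Ready=%s reason=%s\n", c.Status, c.Reason)
}
```

Note the message embeds the same CNI error string repeated by the pod_workers records, so the node condition and the per-pod sync failures share one root cause.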
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs") pod "network-metrics-daemon-qmvkf" (UID: "b9693139-63f6-471e-ae19-744460a6b114") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.973728 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.973790 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.973802 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.973821 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5118]: I1208 19:30:46.973831 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.076339 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.076422 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.076438 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.076465 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.076482 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.096188 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.096241 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:47 crc kubenswrapper[5118]: E1208 19:30:47.096397 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.096412 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:47 crc kubenswrapper[5118]: E1208 19:30:47.096544 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.096628 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:47 crc kubenswrapper[5118]: E1208 19:30:47.096646 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:47 crc kubenswrapper[5118]: E1208 19:30:47.096719 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.178909 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.178988 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.179003 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.179031 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.179049 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.281516 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.281594 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.281613 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.281628 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.281648 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.383977 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.384019 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.384031 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.384053 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.384066 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.487036 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.487116 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.487130 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.487149 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.487160 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.590032 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.590114 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.590139 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.590163 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.590182 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.692823 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.692902 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.692941 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.692971 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.693172 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.796406 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.796468 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.796488 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.796510 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.796528 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.899520 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.899583 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.899595 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.899615 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5118]: I1208 19:30:47.899627 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.001728 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.001771 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.001783 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.001800 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.001811 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.104701 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.104742 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.104752 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.104764 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.104773 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.113633 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qmvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9693139-63f6-471e-ae19-744460a6b114\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qmvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.129106 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc62458c-133b-4909-91ab-b28870b78816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-r2hg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.144442 5118 
status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ede7832-cf65-41a7-bd5d-aced161a948b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5f4b44aaf2cdcfc560006f179f3c73d2f8d9096fca618c7ca57c8230fd49c15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.157557 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732d7dd3-9bf4-4e4c-9583-7cc2d66a273c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f96b5c895f4872869c8afd92cd4a2f5eb829c355a2e35dc83c6741c426f42ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://027e114c2682c800b54ad673ffaf9a3e6d2e4b1b44a3395f348dfc94c54ddc30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://638dff118a255984c06222c27a23f2f72f75f5f45043827e4866fd6e5ad9efa6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.167884 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.179821 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.192316 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-j4b8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87shs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-j4b8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.207141 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.207196 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.207212 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.207234 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.207248 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.210497 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6klf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.225620 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe5ad69b-e87f-4884-afa1-9f57df6393b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c34e38756564d5facd2424d608df2958fd9546536f3c41cac83e9bfd12c30913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad051eb042
181f65b6862f8f0f09916b05c9fcd8e66d8642c1f86ae78267d1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4115e50a4084c607cd9530d3ab0e2b96fee8dbc9af125d400209816dc621f62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.239465 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2d8304-0772-47e0-8c2d-ed33f18c6dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":
{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:10Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 19:30:10.058099 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:10.058228 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:10.058973 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1331124983/tls.crt::/tmp/serving-cert-1331124983/tls.key\\\\\\\"\\\\nI1208 19:30:10.478364 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:10.481607 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:10.481639 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:10.481678 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:10.481710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:10.486171 1 secure_serving.go:57] Forcing use of 
http/1.1 only\\\\nI1208 19:30:10.486196 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:10.486240 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:10.486262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:10.486266 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:10.486269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:10.488953 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,
\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.252487 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.271076 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xg8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.282359 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fp8c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96pbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fp8c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.293524 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7g24j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"688024c3-8b6c-450e-a7b2-3b3165438f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkhpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7g24j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.302854 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0052f7cb-2eab-42e7-8f98-b1544811d9c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-twnt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.309602 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.309665 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.309705 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.309732 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.309751 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
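
The patch failures above share one cause that the log itself pins down: every pod status update must pass the validating webhook pod.network-node-identity.openshift.io, which is reached at https://127.0.0.1:9743/pod on the node, and nothing is listening there (dial tcp ... connect: connection refused). The network-node-identity-dgvkt entries further down show why: the webhook and approver containers are themselves stuck in CreateContainerConfigError, so status patches keep failing until that pod starts. Below is a minimal reachability probe for that endpoint, as a sketch assuming it runs on the node itself; the host, port, and path come from the log lines, everything else is illustrative.

# Probe the network-node-identity webhook endpoint the kubelet cannot reach.
# A diagnostic sketch, assuming shell access to the node; ECONNREFUSED here
# reproduces exactly the "connect: connection refused" seen in the journal.
import socket
import ssl

HOST, PORT = "127.0.0.1", 9743

try:
    # TCP connect first: "connection refused" means nothing is listening,
    # which this reproduces without needing a TLS handshake at all.
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        # If TCP succeeds, attempt a TLS handshake; the webhook serves HTTPS
        # with a cluster-internal certificate, so verification is disabled
        # for this probe only.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("webhook endpoint is listening, TLS:", tls.version())
except ConnectionRefusedError:
    print("connection refused: the webhook pod is not serving yet")
except (socket.timeout, ssl.SSLError) as exc:
    print("endpoint reachable but unhealthy:", exc)
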
Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.326661 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56a13789-0247-4d3a-9b22-6f0bc1a77b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://16184f6f1982588e4aacf024dca32892985c428914dfab58baf03a3e0a296cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://87524e06886d743364c19bf1d1cbd1e8c7e9be19424206ec6b49d02a770729ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9502f9a9fc4385a11375d1454dc563a79e935d00cf846d1cba59363a82cdebf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e23a47c43a333a7dbc87ffbd2d9968813080ef443b1706e946996bd22bd6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://c054019c6129f78ea8bc4f9abd8a9cb3f052c4b135ce01e75b822c97ba27de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.338985 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.355529 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.369798 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.412410 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.412830 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.412974 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.413128 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.413263 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.515034 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.515377 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.515476 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.515570 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.515675 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
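
The lastState blocks above record exitCode 137 with reason ContainerStatusUnknown and the note that the container could not be located after the pod was deleted. 137 follows the POSIX convention of 128 plus the signal number, i.e. SIGKILL: the containers were killed rather than exiting on their own, which fits workloads torn down while the node was restarting. A one-line decoding of that convention:

# exitCode 137 = 128 + 9 = SIGKILL, per the POSIX exit-status convention.
import signal

EXIT_CODE = 137
if EXIT_CODE > 128:
    print("terminated by signal:", signal.Signals(EXIT_CODE - 128).name)  # SIGKILL
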
Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.618057 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.618094 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.618104 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.618120 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.618131 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.721054 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.721122 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.721143 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.721169 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.721188 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.756454 5118 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.823179 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.823272 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.823335 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.823369 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.823389 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.926357 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.926437 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.926451 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.926473 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5118]: I1208 19:30:48.926486 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.029236 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.029290 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.029303 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.029322 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.029334 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.095545 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:49 crc kubenswrapper[5118]: E1208 19:30:49.095778 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.095965 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:49 crc kubenswrapper[5118]: E1208 19:30:49.096043 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.096058 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.096100 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:49 crc kubenswrapper[5118]: E1208 19:30:49.096163 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:49 crc kubenswrapper[5118]: E1208 19:30:49.096460 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.131560 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.131719 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.131735 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.131752 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.131768 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.234885 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.234972 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.234999 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.235031 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.235060 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.338224 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.338277 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.338287 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.338304 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.338317 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.440962 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.441032 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.441052 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.441079 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.441103 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.543397 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.543468 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.543490 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.543515 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.543532 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.646405 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.646476 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.646493 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.646513 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.646524 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.749409 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.749493 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.749522 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.749554 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.749577 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.852097 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.852170 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.852190 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.852215 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.852232 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.954885 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.954954 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.954973 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.954996 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5118]: I1208 19:30:49.955013 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.056947 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.056986 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.056996 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.057011 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.057021 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.159281 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.159532 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.159549 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.159564 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.159573 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.261498 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.261550 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.261560 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.261576 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.261585 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.364556 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.364623 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.364634 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.364647 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.364657 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.466969 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.467031 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.467047 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.467067 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.467080 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.521937 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" event={"ID":"aa21ead9-3381-422c-b52e-4a10a3ed1bd4","Type":"ContainerStarted","Data":"a6b054516838b94cec4524a69f84c57ef816ff2ca7d3c41626cbb673ff8b5c30"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.524969 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"c3fcf930cb5362c959758c5d023173bfce28069a399f5e6c54eae319d6862faf"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.537647 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fp8c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96pbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fp8c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.549493 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-7g24j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"688024c3-8b6c-450e-a7b2-3b3165438f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkhpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7g24j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.562051 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0052f7cb-2eab-42e7-8f98-b1544811d9c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-twnt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.569982 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 
crc kubenswrapper[5118]: I1208 19:30:50.570040 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.570051 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.570069 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.570082 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.580933 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56a13789-0247-4d3a-9b22-6f0bc1a77b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://16184f6f1982588e4aacf024dca32892985c428914dfab58baf03a3e0a296cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://87524e06886d743364c19bf1d1cbd1e8c7e9be19424206ec6b49d02a770729ac\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9502f9a9fc4385a11375d1454dc563a79e935d00cf846d1cba59363a82cdebf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e23a47c43a333a7dbc87ffbd2d9968813080ef443b1706e946996bd22bd6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://c054019c6129f78ea8bc4f9abd8a9cb3f052c4b135ce01e75b822c97ba27de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.594923 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.614092 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.634922 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.654558 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qmvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9693139-63f6-471e-ae19-744460a6b114\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qmvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.671200 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc62458c-133b-4909-91ab-b28870b78816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-r2hg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.672727 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.672772 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.672789 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.672809 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.672823 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.685146 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ede7832-cf65-41a7-bd5d-aced161a948b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5f4b44aaf2cdcfc560006f179f3c73d2f8d9096fca618c7ca57c8230fd49c15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"
name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.702542 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732d7dd3-9bf4-4e4c-9583-7cc2d66a273c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f96b5c895f4872869c8afd92cd4a2f5eb829c355a2e35dc83c6741c426f42ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\
\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://027e114c2682c800b54ad673ffaf9a3e6d2e4b1b44a3395f348dfc94c54ddc30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://638dff118a255984c06222c27a23f2f72f75f5f45043827e4866fd6e5ad9efa6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.716641 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.729474 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.742115 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-j4b8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87shs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-j4b8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.757437 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6klf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc 
kubenswrapper[5118]: I1208 19:30:50.771416 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe5ad69b-e87f-4884-afa1-9f57df6393b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c34e38756564d5facd2424d608df2958fd9546536f3c41cac83e9bfd12c30913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad051eb042181f65b6862f8f0f09916b05c9fcd8e66d8642c1f86ae78267d1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4115e50a4084c607cd9530d3ab0e2b96fee8dbc9af125d400209816dc621f62d\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.775636 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.775716 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.775736 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.775760 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.775775 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.784949 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2d8304-0772-47e0-8c2d-ed33f18c6dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:10Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 19:30:10.058099 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:10.058228 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:10.058973 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1331124983/tls.crt::/tmp/serving-cert-1331124983/tls.key\\\\\\\"\\\\nI1208 19:30:10.478364 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:10.481607 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:10.481639 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:10.481678 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:10.481710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:10.486171 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:10.486196 1 genericapiserver.go:546] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nW1208 19:30:10.486240 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:10.486262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:10.486266 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:10.486269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:10.488953 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a2330
38d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.800592 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.819941 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b054516838b94cec4524a69f84c57ef816ff2ca7d3c41626cbb673ff8b5c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xg8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.831422 5118 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-qmvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9693139-63f6-471e-ae19-744460a6b114\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c652h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qmvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.842006 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc62458c-133b-4909-91ab-b28870b78816\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h27mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-r2hg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.854146 5118 
status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ede7832-cf65-41a7-bd5d-aced161a948b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5f4b44aaf2cdcfc560006f179f3c73d2f8d9096fca618c7ca57c8230fd49c15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11a3b45932159b09f23c3607b243fe3b8c6ed6ef187c042f0cf0a14d0942ec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.870078 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732d7dd3-9bf4-4e4c-9583-7cc2d66a273c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f96b5c895f4872869c8afd92cd4a2f5eb829c355a2e35dc83c6741c426f42ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://027e114c2682c800b54ad673ffaf9a3e6d2e4b1b44a3395f348dfc94c54ddc30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://638dff118a255984c06222c27a23f2f72f75f5f45043827e4866fd6e5ad9efa6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.878568 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.878941 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.879118 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.879246 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.879357 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.880678 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.892730 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.906739 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-j4b8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87shs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-j4b8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.923219 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6klf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc 
kubenswrapper[5118]: I1208 19:30:50.935764 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe5ad69b-e87f-4884-afa1-9f57df6393b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c34e38756564d5facd2424d608df2958fd9546536f3c41cac83e9bfd12c30913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad051eb042181f65b6862f8f0f09916b05c9fcd8e66d8642c1f86ae78267d1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4115e50a4084c607cd9530d3ab0e2b96fee8dbc9af125d400209816dc621f62d\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.952454 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2d8304-0772-47e0-8c2d-ed33f18c6dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:10Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 19:30:10.058099 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:10.058228 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:10.058973 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1331124983/tls.crt::/tmp/serving-cert-1331124983/tls.key\\\\\\\"\\\\nI1208 19:30:10.478364 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:10.481607 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:10.481639 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:10.481678 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:10.481710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:10.486171 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:10.486196 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:10.486240 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:10.486262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:10.486266 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:10.486269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:10.488953 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.964254 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3fcf930cb5362c959758c5d023173bfce28069a399f5e6c54eae319d6862faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.977273 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b054516838b94cec4524a69f84c57ef816ff2ca7d3c41626cbb673ff8b5c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xg8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.981320 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.981499 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.981598 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.981679 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.981781 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.988611 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fp8c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96pbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fp8c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5118]: I1208 19:30:50.996982 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7g24j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"688024c3-8b6c-450e-a7b2-3b3165438f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkhpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7g24j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.006052 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0052f7cb-2eab-42e7-8f98-b1544811d9c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-twnt9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.026406 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56a13789-0247-4d3a-9b22-6f0bc1a77b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://16184f6f1982588e4aacf024dca32892985c428914dfab58baf03a3e0a296cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://87524e06886d743364c19bf1d1cbd1e8c7e9be19424206ec6b49d02a770729ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\
"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9502f9a9fc4385a11375d1454dc563a79e935d00cf846d1cba59363a82cdebf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e23a47c43a333a7dbc87ffbd2d9968813080ef443b1706e946996bd22bd6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://c054019c6129f78ea8bc4f9abd8a9cb3f052c4b135ce01e75b822c97ba27de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"
data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279ee597b69db21a603f5b68418bf5407320d57b51c75b7dc9f56776ec109ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd7bb6b3cbab052623ca5755b789764a6909190a393b9d46358d177a45dee74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece9b7fc3121bc4f834fa2e993f2066c32baa9563e15facc11ca12149390fd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.038741 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.051765 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.061741 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.084305 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.084368 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.084380 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.084399 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.084411 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.095727 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.095970 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:51 crc kubenswrapper[5118]: E1208 19:30:51.096003 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:51 crc kubenswrapper[5118]: E1208 19:30:51.096063 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.096102 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:51 crc kubenswrapper[5118]: E1208 19:30:51.096165 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.096207 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:51 crc kubenswrapper[5118]: E1208 19:30:51.096291 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.188114 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.188632 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.188647 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.188671 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.188704 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.294349 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.294388 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.294449 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.294467 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.294479 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.396556 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.396609 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.396623 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.396640 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.396651 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.499331 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.499379 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.499392 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.499409 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.499425 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.529150 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fp8c5" event={"ID":"86ab4495-4d65-4b3e-9a3d-bfaad21f506a","Type":"ContainerStarted","Data":"b0d73dbbfe74481807b52e267fa22a38cdd0ceaa65f63154029ae7df357914f7"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.531413 5118 generic.go:358] "Generic (PLEG): container finished" podID="aa21ead9-3381-422c-b52e-4a10a3ed1bd4" containerID="a6b054516838b94cec4524a69f84c57ef816ff2ca7d3c41626cbb673ff8b5c30" exitCode=0 Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.531500 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" event={"ID":"aa21ead9-3381-422c-b52e-4a10a3ed1bd4","Type":"ContainerDied","Data":"a6b054516838b94cec4524a69f84c57ef816ff2ca7d3c41626cbb673ff8b5c30"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.534716 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"08f572858484c3d07bbb95addbe3534f80a30948cb31131bd36bd182fe32df21"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.534783 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"9ea4a8e2d9783c685fc34ec547a7ade8ebd654db290710e5872c678ddc1d27f9"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.544196 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.554452 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-j4b8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87shs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-j4b8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.571985 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6klf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc 
kubenswrapper[5118]: I1208 19:30:51.583205 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe5ad69b-e87f-4884-afa1-9f57df6393b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c34e38756564d5facd2424d608df2958fd9546536f3c41cac83e9bfd12c30913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad051eb042181f65b6862f8f0f09916b05c9fcd8e66d8642c1f86ae78267d1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4115e50a4084c607cd9530d3ab0e2b96fee8dbc9af125d400209816dc621f62d\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4612f6ac3bc67751af221e89ef805795a861f7191d624aed54574e54938eb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.597926 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2d8304-0772-47e0-8c2d-ed33f18c6dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:10Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 19:30:10.058099 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:10.058228 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:10.058973 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1331124983/tls.crt::/tmp/serving-cert-1331124983/tls.key\\\\\\\"\\\\nI1208 19:30:10.478364 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:10.481607 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:10.481639 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:10.481678 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:10.481710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:10.486171 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:10.486196 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:10.486240 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:10.486259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:10.486262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:10.486266 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:10.486269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:10.488953 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.602057 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.602117 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.602141 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.602160 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.602171 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.614870 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3fcf930cb5362c959758c5d023173bfce28069a399f5e6c54eae319d6862faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.632330 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa21ead9-3381-422c-b52e-4a10a3ed1bd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b054516838b94cec4524a69f84c57ef816ff2ca7d3c41626cbb673ff8b5c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\
\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r955p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xg8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.646280 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fp8c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ab4495-4d65-4b3e-9a3d-bfaad21f506a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0d73dbbfe74481807b52e267fa22a38cdd0ceaa65f63154029ae7df357914f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\
\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96pbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fp8c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.654168 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7g24j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"688024c3-8b6c-450e-a7b2-3b3165438f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkhpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7g24j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial 
tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.665921 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0052f7cb-2eab-42e7-8f98-b1544811d9c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-twnt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.704415 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.704477 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.704487 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.704501 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.704534 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.724231 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=13.724214218 podStartE2EDuration="13.724214218s" podCreationTimestamp="2025-12-08 19:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:51.708841308 +0000 UTC m=+104.001686785" watchObservedRunningTime="2025-12-08 19:30:51.724214218 +0000 UTC m=+104.017059675" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.801062 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=13.801040857 podStartE2EDuration="13.801040857s" podCreationTimestamp="2025-12-08 19:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:51.799743711 +0000 UTC m=+104.092589178" watchObservedRunningTime="2025-12-08 19:30:51.801040857 +0000 UTC m=+104.093886324" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.806200 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.806229 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.806237 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.806248 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.806258 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.819564 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=13.819545322 podStartE2EDuration="13.819545322s" podCreationTimestamp="2025-12-08 19:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:51.818960546 +0000 UTC m=+104.111806003" watchObservedRunningTime="2025-12-08 19:30:51.819545322 +0000 UTC m=+104.112390779" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.908973 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.909396 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.909462 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.909535 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.909596 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.929928 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=13.929899066 podStartE2EDuration="13.929899066s" podCreationTimestamp="2025-12-08 19:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:51.912589814 +0000 UTC m=+104.205435271" watchObservedRunningTime="2025-12-08 19:30:51.929899066 +0000 UTC m=+104.222744523" Dec 08 19:30:51 crc kubenswrapper[5118]: I1208 19:30:51.991401 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fp8c5" podStartSLOduration=84.991379945 podStartE2EDuration="1m24.991379945s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:51.980630652 +0000 UTC m=+104.273476109" watchObservedRunningTime="2025-12-08 19:30:51.991379945 +0000 UTC m=+104.284225402" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.011885 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.011924 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.011934 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.011948 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.011960 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.115095 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.115158 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.115171 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.115191 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.115206 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.217489 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.217536 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.217547 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.217564 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.217577 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.342303 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.342395 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.342407 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.342427 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.342438 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.444328 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.444375 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.444399 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.444416 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.444426 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.539979 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" event={"ID":"fc62458c-133b-4909-91ab-b28870b78816","Type":"ContainerStarted","Data":"3c76de142b2f857046a2b0c4f36c88cf22b35c02d03f8513c826780f31014de4"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.540026 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" event={"ID":"fc62458c-133b-4909-91ab-b28870b78816","Type":"ContainerStarted","Data":"ebb7bab4f88ec8dba2d4335caae7a71c141f61dd649a4738a19dac43d9570695"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.543058 5118 generic.go:358] "Generic (PLEG): container finished" podID="aa21ead9-3381-422c-b52e-4a10a3ed1bd4" containerID="7b3c43a0a42a23466bcca1c2a3c508cfe553ef310e229c227c0aac2aeca9daed" exitCode=0 Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.543146 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" event={"ID":"aa21ead9-3381-422c-b52e-4a10a3ed1bd4","Type":"ContainerDied","Data":"7b3c43a0a42a23466bcca1c2a3c508cfe553ef310e229c227c0aac2aeca9daed"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.546033 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.546072 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.546084 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.546102 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.546115 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.559226 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" podStartSLOduration=84.559205925 podStartE2EDuration="1m24.559205925s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:52.558082094 +0000 UTC m=+104.850927571" watchObservedRunningTime="2025-12-08 19:30:52.559205925 +0000 UTC m=+104.852051392" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.649596 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.649634 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.649669 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.649968 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.650880 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.753062 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.753113 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.753126 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.753141 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.753152 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.855211 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.855247 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.855256 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.855270 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.855279 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.957526 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.957565 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.957573 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.957588 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5118]: I1208 19:30:52.957599 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.059562 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.059620 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.059637 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.059661 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.059678 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:53Z","lastTransitionTime":"2025-12-08T19:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.096313 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.096331 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:53 crc kubenswrapper[5118]: E1208 19:30:53.096514 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.096561 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:53 crc kubenswrapper[5118]: E1208 19:30:53.096750 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.096801 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:53 crc kubenswrapper[5118]: E1208 19:30:53.096944 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:30:53 crc kubenswrapper[5118]: E1208 19:30:53.097558 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.098415 5118 scope.go:117] "RemoveContainer" containerID="cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.163178 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.163236 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.163248 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.163268 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.163280 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:53Z","lastTransitionTime":"2025-12-08T19:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.266233 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.266285 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.266332 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.266355 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.266366 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:53Z","lastTransitionTime":"2025-12-08T19:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.368823 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.368865 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.368877 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.368890 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.368900 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:53Z","lastTransitionTime":"2025-12-08T19:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.471785 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.471842 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.471860 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.471883 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.471902 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:53Z","lastTransitionTime":"2025-12-08T19:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.549086 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.551293 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.552277 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.556130 5118 generic.go:358] "Generic (PLEG): container finished" podID="aa21ead9-3381-422c-b52e-4a10a3ed1bd4" containerID="d059ca18e5cd23f0b862a2f672cb4df6903b758a85a72e67272e1211dccdd979" exitCode=0 Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.556163 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" event={"ID":"aa21ead9-3381-422c-b52e-4a10a3ed1bd4","Type":"ContainerDied","Data":"d059ca18e5cd23f0b862a2f672cb4df6903b758a85a72e67272e1211dccdd979"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.558049 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerID="8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed" exitCode=0 Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.558089 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerDied","Data":"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.574568 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.574626 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.574645 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.574670 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.574701 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:53Z","lastTransitionTime":"2025-12-08T19:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.613231 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.613206323 podStartE2EDuration="15.613206323s" podCreationTimestamp="2025-12-08 19:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:53.58200168 +0000 UTC m=+105.874847147" watchObservedRunningTime="2025-12-08 19:30:53.613206323 +0000 UTC m=+105.906051780" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.677607 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.677657 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.677667 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.677699 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.677717 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:53Z","lastTransitionTime":"2025-12-08T19:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.782133 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.782623 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.782639 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.782657 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.782982 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:53Z","lastTransitionTime":"2025-12-08T19:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.888935 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.888984 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.888994 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.889007 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.889016 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:53Z","lastTransitionTime":"2025-12-08T19:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.990831 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.990870 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.990880 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.990894 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:53 crc kubenswrapper[5118]: I1208 19:30:53.990904 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:53Z","lastTransitionTime":"2025-12-08T19:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.092723 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.092780 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.092795 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.092816 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.092833 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:54Z","lastTransitionTime":"2025-12-08T19:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.111243 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.111316 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.111325 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.111340 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.111378 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:54Z","lastTransitionTime":"2025-12-08T19:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.150860 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7"] Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.154299 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.158367 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.158376 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.158363 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.159173 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.271208 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c5869543-d420-427e-bf4e-8aa135f9f97e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.271260 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c5869543-d420-427e-bf4e-8aa135f9f97e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.271305 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c5869543-d420-427e-bf4e-8aa135f9f97e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.271332 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5869543-d420-427e-bf4e-8aa135f9f97e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.271409 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5869543-d420-427e-bf4e-8aa135f9f97e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.372860 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5869543-d420-427e-bf4e-8aa135f9f97e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.373104 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5869543-d420-427e-bf4e-8aa135f9f97e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.373147 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c5869543-d420-427e-bf4e-8aa135f9f97e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.373171 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c5869543-d420-427e-bf4e-8aa135f9f97e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.373213 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c5869543-d420-427e-bf4e-8aa135f9f97e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.373328 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c5869543-d420-427e-bf4e-8aa135f9f97e-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.373485 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c5869543-d420-427e-bf4e-8aa135f9f97e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.374460 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c5869543-d420-427e-bf4e-8aa135f9f97e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.381661 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5869543-d420-427e-bf4e-8aa135f9f97e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.392546 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5869543-d420-427e-bf4e-8aa135f9f97e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qkkn7\" (UID: \"c5869543-d420-427e-bf4e-8aa135f9f97e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.504989 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" Dec 08 19:30:54 crc kubenswrapper[5118]: W1208 19:30:54.520724 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5869543_d420_427e_bf4e_8aa135f9f97e.slice/crio-b8f26b71c6d4521267aec976f57fb62bf2304779fda62064d91c77f3e982b30f WatchSource:0}: Error finding container b8f26b71c6d4521267aec976f57fb62bf2304779fda62064d91c77f3e982b30f: Status 404 returned error can't find the container with id b8f26b71c6d4521267aec976f57fb62bf2304779fda62064d91c77f3e982b30f Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.579071 5118 generic.go:358] "Generic (PLEG): container finished" podID="aa21ead9-3381-422c-b52e-4a10a3ed1bd4" containerID="b9add4b2a91de7ab64d572596f72d4804d0866f5231b3b09471eb477a750c2bd" exitCode=0 Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.579354 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" event={"ID":"aa21ead9-3381-422c-b52e-4a10a3ed1bd4","Type":"ContainerDied","Data":"b9add4b2a91de7ab64d572596f72d4804d0866f5231b3b09471eb477a750c2bd"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.587333 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerStarted","Data":"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.587378 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerStarted","Data":"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.587387 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerStarted","Data":"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.587396 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerStarted","Data":"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.587405 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerStarted","Data":"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.587413 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerStarted","Data":"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.588644 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-j4b8g" event={"ID":"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742","Type":"ContainerStarted","Data":"d4e35812f048f9b4a1f8a2dfc7e60eb1a2d7df2bce39455c9e8ba7657e3b9fb8"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.589937 5118 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" event={"ID":"c5869543-d420-427e-bf4e-8aa135f9f97e","Type":"ContainerStarted","Data":"b8f26b71c6d4521267aec976f57fb62bf2304779fda62064d91c77f3e982b30f"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.591438 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7g24j" event={"ID":"688024c3-8b6c-450e-a7b2-3b3165438f4b","Type":"ContainerStarted","Data":"0cd218050c631b6195be494d2041e14226561305888208a78b4a33bfa10b3688"} Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.648893 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7g24j" podStartSLOduration=86.648873529 podStartE2EDuration="1m26.648873529s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:54.648061888 +0000 UTC m=+106.940907345" watchObservedRunningTime="2025-12-08 19:30:54.648873529 +0000 UTC m=+106.941718986" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.649054 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-j4b8g" podStartSLOduration=86.649050724 podStartE2EDuration="1m26.649050724s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:54.624048681 +0000 UTC m=+106.916894148" watchObservedRunningTime="2025-12-08 19:30:54.649050724 +0000 UTC m=+106.941896181" Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.880808 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.880775044 +0000 UTC m=+123.173620501 (durationBeforeRetry 16s). 
Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.880626 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.881010 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.881041 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.881188 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881133 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881222 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881283 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.881274207 +0000 UTC m=+123.174119664 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881342 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.881318908 +0000 UTC m=+123.174164375 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881395 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881409 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881421 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.881419 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881471 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.881463302 +0000 UTC m=+123.174308759 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881650 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881740 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881761 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.881862 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.881836553 +0000 UTC m=+123.174682020 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:54 crc kubenswrapper[5118]: I1208 19:30:54.983024 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.983326 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:54 crc kubenswrapper[5118]: E1208 19:30:54.983493 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs podName:b9693139-63f6-471e-ae19-744460a6b114 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.983453778 +0000 UTC m=+123.276299375 (durationBeforeRetry 16s). 
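Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs") pod "network-metrics-daemon-qmvkf" (UID: "b9693139-63f6-471e-ae19-744460a6b114") : object "openshift-multus"/"metrics-daemon-secret" not registered

Note on the retry scheduling: each failed volume operation above is pushed onto an exponential-backoff schedule by nestedpendingoperations, and "(durationBeforeRetry 16s)" is the current delay. A sketch of such a schedule, assuming (not read from kubelet source; treat the constants as assumptions) an initial 500ms delay, a doubling factor, and a cap around 2m2s:

    // backoff.go - sketch of the retry schedule implied by
    // "(durationBeforeRetry 16s)"; constants are assumptions.
    package main

    import (
        "fmt"
        "time"
    )

    // backoff returns the delay before the retry following `attempt`
    // consecutive failures: initial * factor^attempt, capped at ceiling.
    func backoff(initial time.Duration, factor float64, ceiling time.Duration, attempt int) time.Duration {
        d := initial
        for i := 0; i < attempt; i++ {
            d = time.Duration(float64(d) * factor)
            if d > ceiling {
                return ceiling
            }
        }
        return d
    }

    func main() {
        for attempt := 0; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: retry after %v\n", attempt, backoff(500*time.Millisecond, 2, 122*time.Second, attempt))
        }
        // attempt 5 yields 500ms * 2^5 = 16s, matching the log above.
    }

Under these assumed constants, the 16s delay corresponds to the fifth consecutive failure of the operation.
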
Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.078740 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.089749 5118 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.096540 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.096614 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:55 crc kubenswrapper[5118]: E1208 19:30:55.096775 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:30:55 crc kubenswrapper[5118]: E1208 19:30:55.096858 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.097132 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.097195 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:55 crc kubenswrapper[5118]: E1208 19:30:55.097353 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:55 crc kubenswrapper[5118]: E1208 19:30:55.097430 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
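pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"

Note on the line format: every kubenswrapper entry in this journal shares the klog header layout visible above: a severity letter fused with the date (I1208), wall-clock time, PID, source file:line, then "]" and either a quoted structured message or free text. A small Go parser for that header, written only against the format as it appears in this log, can be handy when slicing entries out of lines like these:

    // klogparse.go - minimal parser for the klog header used by the
    // kubenswrapper entries in this journal.
    package main

    import (
        "fmt"
        "regexp"
    )

    // severity, mmdd date, time, pid, source file:line, remainder.
    var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\] (.*)$`)

    func main() {
        line := `I1208 19:30:55.596422 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9"`
        m := klogRe.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("severity=%s date(mmdd)=%s time=%s pid=%s source=%s\nmessage=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }

The sample line is taken verbatim from the entry that follows.
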
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.596422 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerStarted","Data":"6b75a6df46289cfb7fc645004032eecc980b25683bb45f1419b163f43cdb8ac3"} Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.596960 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerStarted","Data":"92647ff13fb1d82844fdc1c78fadbe5a9f72de51c235d82acb429790753aa73b"} Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.599185 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" event={"ID":"c5869543-d420-427e-bf4e-8aa135f9f97e","Type":"ContainerStarted","Data":"32df6942c5173564e7acc14282f0e671ef5dd68abc60474656a9355ea99c6708"} Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.602234 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" event={"ID":"aa21ead9-3381-422c-b52e-4a10a3ed1bd4","Type":"ContainerStarted","Data":"3ea5af7f31468d2d2508477f1aaab80b64d3a0816d8985f70a34b8195463417a"} Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.616336 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podStartSLOduration=87.616313383 podStartE2EDuration="1m27.616313383s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:55.61473123 +0000 UTC m=+107.907576707" watchObservedRunningTime="2025-12-08 19:30:55.616313383 +0000 UTC m=+107.909158850" Dec 08 19:30:55 crc kubenswrapper[5118]: I1208 19:30:55.649253 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qkkn7" podStartSLOduration=87.649236343 podStartE2EDuration="1m27.649236343s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:55.647987168 +0000 UTC m=+107.940832625" watchObservedRunningTime="2025-12-08 19:30:55.649236343 +0000 UTC m=+107.942081800" Dec 08 19:30:56 crc kubenswrapper[5118]: I1208 19:30:56.614547 5118 generic.go:358] "Generic (PLEG): container finished" podID="aa21ead9-3381-422c-b52e-4a10a3ed1bd4" containerID="3ea5af7f31468d2d2508477f1aaab80b64d3a0816d8985f70a34b8195463417a" exitCode=0 Dec 08 19:30:56 crc kubenswrapper[5118]: I1208 19:30:56.614631 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" event={"ID":"aa21ead9-3381-422c-b52e-4a10a3ed1bd4","Type":"ContainerDied","Data":"3ea5af7f31468d2d2508477f1aaab80b64d3a0816d8985f70a34b8195463417a"} Dec 08 19:30:57 crc kubenswrapper[5118]: I1208 19:30:57.102089 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:57 crc kubenswrapper[5118]: I1208 19:30:57.102119 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:57 crc kubenswrapper[5118]: E1208 19:30:57.102788 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:30:57 crc kubenswrapper[5118]: I1208 19:30:57.102308 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:57 crc kubenswrapper[5118]: I1208 19:30:57.102188 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:57 crc kubenswrapper[5118]: E1208 19:30:57.103703 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:57 crc kubenswrapper[5118]: E1208 19:30:57.103799 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:57 crc kubenswrapper[5118]: E1208 19:30:57.103857 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:57 crc kubenswrapper[5118]: I1208 19:30:57.622661 5118 generic.go:358] "Generic (PLEG): container finished" podID="aa21ead9-3381-422c-b52e-4a10a3ed1bd4" containerID="824fba8a7f4ca2d0ae5b8b20beef620c817e75ba9f2cb26ccb9511aa68d6ba49" exitCode=0 Dec 08 19:30:57 crc kubenswrapper[5118]: I1208 19:30:57.622827 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" event={"ID":"aa21ead9-3381-422c-b52e-4a10a3ed1bd4","Type":"ContainerDied","Data":"824fba8a7f4ca2d0ae5b8b20beef620c817e75ba9f2cb26ccb9511aa68d6ba49"} Dec 08 19:30:57 crc kubenswrapper[5118]: I1208 19:30:57.632569 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerStarted","Data":"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a"} Dec 08 19:30:58 crc kubenswrapper[5118]: I1208 19:30:58.642769 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" event={"ID":"aa21ead9-3381-422c-b52e-4a10a3ed1bd4","Type":"ContainerStarted","Data":"27d96afb51f90b18dec323732da5d13ad38fa9f98ff658f13009ca6afdde9cd6"} Dec 08 19:30:58 crc kubenswrapper[5118]: I1208 19:30:58.644941 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"128ea8415b93d70eff1e579afb8c7eaa5a8e3c4269af7acf4bcc7208a9437eb2"} Dec 08 19:30:58 crc kubenswrapper[5118]: I1208 19:30:58.672749 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xg8tn" podStartSLOduration=90.672731734 podStartE2EDuration="1m30.672731734s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:58.670251456 +0000 UTC m=+110.963096933" watchObservedRunningTime="2025-12-08 19:30:58.672731734 +0000 UTC m=+110.965577211" Dec 08 19:30:59 crc kubenswrapper[5118]: I1208 19:30:59.095915 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:30:59 crc kubenswrapper[5118]: I1208 19:30:59.095962 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:59 crc kubenswrapper[5118]: I1208 19:30:59.095985 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:59 crc kubenswrapper[5118]: I1208 19:30:59.096108 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:59 crc kubenswrapper[5118]: E1208 19:30:59.096493 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:30:59 crc kubenswrapper[5118]: E1208 19:30:59.096847 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:59 crc kubenswrapper[5118]: E1208 19:30:59.096972 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:59 crc kubenswrapper[5118]: E1208 19:30:59.097018 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:59 crc kubenswrapper[5118]: I1208 19:30:59.651371 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerStarted","Data":"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be"} Dec 08 19:30:59 crc kubenswrapper[5118]: I1208 19:30:59.651986 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:59 crc kubenswrapper[5118]: I1208 19:30:59.652101 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:59 crc kubenswrapper[5118]: I1208 19:30:59.683073 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:30:59 crc kubenswrapper[5118]: I1208 19:30:59.718043 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" podStartSLOduration=91.718012389 podStartE2EDuration="1m31.718012389s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:59.684250929 +0000 UTC m=+111.977096396" watchObservedRunningTime="2025-12-08 19:30:59.718012389 +0000 UTC m=+112.010857846" Dec 08 19:31:00 crc kubenswrapper[5118]: I1208 19:31:00.656067 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:31:00 crc kubenswrapper[5118]: I1208 19:31:00.690158 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:31:01 crc kubenswrapper[5118]: I1208 19:31:01.096492 5118 util.go:30] "No sandbox for pod can be found. 
Dec 08 19:31:01 crc kubenswrapper[5118]: E1208 19:31:01.096903 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:31:01 crc kubenswrapper[5118]: I1208 19:31:01.096543 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:01 crc kubenswrapper[5118]: E1208 19:31:01.096984 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:31:01 crc kubenswrapper[5118]: I1208 19:31:01.096647 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:01 crc kubenswrapper[5118]: E1208 19:31:01.097061 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:31:01 crc kubenswrapper[5118]: I1208 19:31:01.096516 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:01 crc kubenswrapper[5118]: E1208 19:31:01.097126 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:31:01 crc kubenswrapper[5118]: I1208 19:31:01.343941 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qmvkf"] Dec 08 19:31:01 crc kubenswrapper[5118]: I1208 19:31:01.659290 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:31:01 crc kubenswrapper[5118]: E1208 19:31:01.659479 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:31:03 crc kubenswrapper[5118]: I1208 19:31:03.095935 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:03 crc kubenswrapper[5118]: I1208 19:31:03.095998 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:03 crc kubenswrapper[5118]: E1208 19:31:03.096094 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:31:03 crc kubenswrapper[5118]: I1208 19:31:03.095997 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:03 crc kubenswrapper[5118]: E1208 19:31:03.096197 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:31:03 crc kubenswrapper[5118]: E1208 19:31:03.096218 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:31:03 crc kubenswrapper[5118]: I1208 19:31:03.096270 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:31:03 crc kubenswrapper[5118]: E1208 19:31:03.096339 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:31:04 crc kubenswrapper[5118]: I1208 19:31:04.595720 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:31:05 crc kubenswrapper[5118]: I1208 19:31:05.096590 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:31:05 crc kubenswrapper[5118]: I1208 19:31:05.096709 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:05 crc kubenswrapper[5118]: E1208 19:31:05.097113 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qmvkf" podUID="b9693139-63f6-471e-ae19-744460a6b114" Dec 08 19:31:05 crc kubenswrapper[5118]: E1208 19:31:05.097168 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:31:05 crc kubenswrapper[5118]: I1208 19:31:05.096763 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:05 crc kubenswrapper[5118]: I1208 19:31:05.096747 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:05 crc kubenswrapper[5118]: E1208 19:31:05.097266 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:31:05 crc kubenswrapper[5118]: E1208 19:31:05.097461 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:31:06 crc kubenswrapper[5118]: I1208 19:31:06.593957 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 08 19:31:06 crc kubenswrapper[5118]: I1208 19:31:06.594243 5118 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 08 19:31:06 crc kubenswrapper[5118]: I1208 19:31:06.636547 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-8vsfg"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.417533 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-vjsnr"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.417626 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.417804 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.417931 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.418024 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.418511 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.422329 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.422380 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.422799 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.422913 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.423174 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.423230 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.423250 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.423370 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.423453 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.423754 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.423949 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.424030 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.424084 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.424521 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.424541 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.424941 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.434466 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.437314 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.438831 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.439503 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.441175 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.441284 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.441355 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.447570 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.447350 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-fz5jn"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.457024 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.463796 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.463869 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.463898 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.463839 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.464055 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.464138 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.464244 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.464736 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.464915 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 
19:31:07.465005 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.465108 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.465190 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.466104 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.466284 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.466414 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.466529 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.466536 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.468020 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.468196 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.468321 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.468434 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.468548 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.468761 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.468815 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.470000 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.470288 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.473287 5118 kubelet.go:2537] "SyncLoop ADD" source="api" 
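The reflector.go:430 "Caches populated" lines are client-go reflectors inside the kubelet finishing their initial LIST+WATCH for each individual Secret and ConfigMap referenced by the pods just assigned to this node; the same objects feed the volume mounts recorded further below. A rough sketch of that mechanism using the public client-go informer API (the namespace and resync period here are illustrative choices, not values taken from kubelet source):

```go
// Sketch of the informer/reflector machinery behind the "Caches populated"
// messages, assuming in-cluster credentials and the client-go module.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes we run inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch ConfigMaps in a single namespace, resyncing every 10 minutes.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-apiserver"))
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// The moment equivalent to the kubelet's "Caches populated" line:
	// WaitForCacheSync returns once the initial LIST has landed in the store.
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches populated for ConfigMaps in openshift-apiserver")
}
```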
pods=["openshift-controller-manager/controller-manager-65b6cccf98-kk4vd"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.473400 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.476015 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.476107 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.476757 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.476796 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.476861 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.488190 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-hxwm8"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.488449 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.492414 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.493455 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.493458 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.493534 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.493675 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.495507 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.495770 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.495955 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.498162 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.498409 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.498530 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.499060 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.500023 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.500120 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.500356 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.500483 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.503321 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-vjsnr"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.503534 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.505656 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.506792 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.508021 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.509774 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.509815 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.509954 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.510180 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.516309 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-b68tb"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.516519 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.520391 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.521625 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.521670 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.521801 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.524048 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.525509 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.525637 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.529229 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.529935 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c859cf-4198-4549-b24d-d5cc7e650257-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.530028 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04382913-99f0-4bca-abaa-952bbb21e06a-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.530089 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6xl\" (UniqueName: \"kubernetes.io/projected/04382913-99f0-4bca-abaa-952bbb21e06a-kube-api-access-lk6xl\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.530155 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d5ad6856-ba98-4f91-b102-7e41020e2ecf-tmp\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.530240 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9556f84b-c3ef-4dd1-8483-67e5960385a1-audit-dir\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.530272 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04382913-99f0-4bca-abaa-952bbb21e06a-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.530302 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9556f84b-c3ef-4dd1-8483-67e5960385a1-etcd-client\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.530327 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1927987d-1fa4-4b00-b6f0-a7861eb10702-config\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.530345 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-config\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.530364 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9556f84b-c3ef-4dd1-8483-67e5960385a1-serving-cert\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.530385 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-client-ca\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.530408 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78g6q\" (UniqueName: \"kubernetes.io/projected/d5ad6856-ba98-4f91-b102-7e41020e2ecf-kube-api-access-78g6q\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531056 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9556f84b-c3ef-4dd1-8483-67e5960385a1-node-pullsecrets\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531098 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9556f84b-c3ef-4dd1-8483-67e5960385a1-encryption-config\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531501 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-tkctz"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531558 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f7c859cf-4198-4549-b24d-d5cc7e650257-etcd-serving-ca\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531586 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ad6856-ba98-4f91-b102-7e41020e2ecf-serving-cert\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531639 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04382913-99f0-4bca-abaa-952bbb21e06a-serving-cert\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531668 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1927987d-1fa4-4b00-b6f0-a7861eb10702-images\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531711 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04382913-99f0-4bca-abaa-952bbb21e06a-config\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531735 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7c859cf-4198-4549-b24d-d5cc7e650257-etcd-client\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531757 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531783 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd4vw\" (UniqueName: \"kubernetes.io/projected/9556f84b-c3ef-4dd1-8483-67e5960385a1-kube-api-access-nd4vw\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531807 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f7c859cf-4198-4549-b24d-d5cc7e650257-audit-policies\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531835 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7zsw\" (UniqueName: 
\"kubernetes.io/projected/f7c859cf-4198-4549-b24d-d5cc7e650257-kube-api-access-w7zsw\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531857 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-config\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531882 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c859cf-4198-4549-b24d-d5cc7e650257-serving-cert\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531906 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7c859cf-4198-4549-b24d-d5cc7e650257-audit-dir\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531938 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-image-import-ca\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.531965 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.532004 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1927987d-1fa4-4b00-b6f0-a7861eb10702-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.532036 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-audit\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.532059 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxs6m\" (UniqueName: \"kubernetes.io/projected/1927987d-1fa4-4b00-b6f0-a7861eb10702-kube-api-access-qxs6m\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: 
\"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.532864 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.532088 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7c859cf-4198-4549-b24d-d5cc7e650257-encryption-config\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.535189 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.539451 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.539885 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.540059 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.540218 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.540341 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.540578 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.540627 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.540782 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.541970 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.541370 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.541968 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.542643 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.542714 5118 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.542792 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.542807 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.543080 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.543337 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.549676 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-qnl9q"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.549879 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.552111 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.553284 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-k49rf"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.553489 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-qnl9q" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.554080 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.559785 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.559970 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.570783 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-x84b4"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.570960 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.575073 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.575283 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.580355 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.580534 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.585338 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.585480 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.588726 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-hxwm8"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.588921 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.588835 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.591385 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.600622 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-vzpzx"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.600833 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.601422 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.614335 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.614373 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.614618 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.618046 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.618273 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.620025 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.630677 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.630901 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.635440 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-audit\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.635469 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qxs6m\" (UniqueName: \"kubernetes.io/projected/1927987d-1fa4-4b00-b6f0-a7861eb10702-kube-api-access-qxs6m\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.635494 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e65a45b2-4747-4f30-bbfa-d8a711e702e8-trusted-ca\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.635510 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.635542 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7c859cf-4198-4549-b24d-d5cc7e650257-encryption-config\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.635653 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/860d245e-aede-47bd-a8fe-b8bd2f79fd86-auth-proxy-config\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.635679 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.635988 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636098 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c859cf-4198-4549-b24d-d5cc7e650257-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636141 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/860d245e-aede-47bd-a8fe-b8bd2f79fd86-machine-approver-tls\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636174 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636202 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04382913-99f0-4bca-abaa-952bbb21e06a-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636229 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lk6xl\" (UniqueName: \"kubernetes.io/projected/04382913-99f0-4bca-abaa-952bbb21e06a-kube-api-access-lk6xl\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636258 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636301 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/d5ad6856-ba98-4f91-b102-7e41020e2ecf-tmp\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636353 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9556f84b-c3ef-4dd1-8483-67e5960385a1-audit-dir\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636380 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04382913-99f0-4bca-abaa-952bbb21e06a-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636405 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e65a45b2-4747-4f30-bbfa-d8a711e702e8-serving-cert\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636409 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9556f84b-c3ef-4dd1-8483-67e5960385a1-audit-dir\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636417 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-audit\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636444 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9556f84b-c3ef-4dd1-8483-67e5960385a1-etcd-client\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636480 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1927987d-1fa4-4b00-b6f0-a7861eb10702-config\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636507 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-config\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636531 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9556f84b-c3ef-4dd1-8483-67e5960385a1-serving-cert\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636557 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-client-ca\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636587 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-78g6q\" (UniqueName: \"kubernetes.io/projected/d5ad6856-ba98-4f91-b102-7e41020e2ecf-kube-api-access-78g6q\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636646 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9556f84b-c3ef-4dd1-8483-67e5960385a1-node-pullsecrets\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636673 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9556f84b-c3ef-4dd1-8483-67e5960385a1-encryption-config\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636724 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f7c859cf-4198-4549-b24d-d5cc7e650257-etcd-serving-ca\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636747 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ad6856-ba98-4f91-b102-7e41020e2ecf-serving-cert\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636750 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c859cf-4198-4549-b24d-d5cc7e650257-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636774 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: 
\"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636811 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04382913-99f0-4bca-abaa-952bbb21e06a-serving-cert\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636822 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9556f84b-c3ef-4dd1-8483-67e5960385a1-node-pullsecrets\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636843 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzx79\" (UniqueName: \"kubernetes.io/projected/00a48e62-fdf7-4d8f-846f-295c3cb4489e-kube-api-access-tzx79\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636884 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e65a45b2-4747-4f30-bbfa-d8a711e702e8-config\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636913 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636941 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1927987d-1fa4-4b00-b6f0-a7861eb10702-images\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636963 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04382913-99f0-4bca-abaa-952bbb21e06a-config\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.636987 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7c859cf-4198-4549-b24d-d5cc7e650257-etcd-client\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637051 5118 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637090 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637118 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nd4vw\" (UniqueName: \"kubernetes.io/projected/9556f84b-c3ef-4dd1-8483-67e5960385a1-kube-api-access-nd4vw\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637140 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f7c859cf-4198-4549-b24d-d5cc7e650257-audit-policies\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637166 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57h7j\" (UniqueName: \"kubernetes.io/projected/86f2d26a-630b-4a98-9dc3-c1ec245d7b6b-kube-api-access-57h7j\") pod \"downloads-747b44746d-qnl9q\" (UID: \"86f2d26a-630b-4a98-9dc3-c1ec245d7b6b\") " pod="openshift-console/downloads-747b44746d-qnl9q" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637236 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w7zsw\" (UniqueName: \"kubernetes.io/projected/f7c859cf-4198-4549-b24d-d5cc7e650257-kube-api-access-w7zsw\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637282 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-config\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637311 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/860d245e-aede-47bd-a8fe-b8bd2f79fd86-config\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637336 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zff75\" (UniqueName: 
\"kubernetes.io/projected/860d245e-aede-47bd-a8fe-b8bd2f79fd86-kube-api-access-zff75\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637358 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-config\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637372 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637369 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d5ad6856-ba98-4f91-b102-7e41020e2ecf-tmp\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637410 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04382913-99f0-4bca-abaa-952bbb21e06a-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637444 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c859cf-4198-4549-b24d-d5cc7e650257-serving-cert\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.637523 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1927987d-1fa4-4b00-b6f0-a7861eb10702-config\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.638094 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f7c859cf-4198-4549-b24d-d5cc7e650257-etcd-serving-ca\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.638251 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1927987d-1fa4-4b00-b6f0-a7861eb10702-images\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc 
kubenswrapper[5118]: I1208 19:31:07.638579 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04382913-99f0-4bca-abaa-952bbb21e06a-config\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.638629 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vd57\" (UniqueName: \"kubernetes.io/projected/e65a45b2-4747-4f30-bbfa-d8a711e702e8-kube-api-access-8vd57\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.638941 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.639223 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04382913-99f0-4bca-abaa-952bbb21e06a-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.639354 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-client-ca\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.639508 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f7c859cf-4198-4549-b24d-d5cc7e650257-audit-policies\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.639840 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7c859cf-4198-4549-b24d-d5cc7e650257-audit-dir\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.639970 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-dir\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.640010 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.640097 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7c859cf-4198-4549-b24d-d5cc7e650257-audit-dir\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.640134 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-image-import-ca\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.640172 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.640230 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-policies\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.640343 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.640475 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1927987d-1fa4-4b00-b6f0-a7861eb10702-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.641451 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-image-import-ca\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.641451 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.641524 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9556f84b-c3ef-4dd1-8483-67e5960385a1-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.641596 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.642256 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-config\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.642503 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.643744 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7c859cf-4198-4549-b24d-d5cc7e650257-encryption-config\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.644010 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9556f84b-c3ef-4dd1-8483-67e5960385a1-etcd-client\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.644371 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ad6856-ba98-4f91-b102-7e41020e2ecf-serving-cert\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.645077 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04382913-99f0-4bca-abaa-952bbb21e06a-serving-cert\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.645105 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9556f84b-c3ef-4dd1-8483-67e5960385a1-serving-cert\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.646195 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9556f84b-c3ef-4dd1-8483-67e5960385a1-encryption-config\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.646443 5118 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c859cf-4198-4549-b24d-d5cc7e650257-serving-cert\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.648041 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7c859cf-4198-4549-b24d-d5cc7e650257-etcd-client\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.648282 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1927987d-1fa4-4b00-b6f0-a7861eb10702-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.660900 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.675634 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.675861 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.679957 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.685588 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-r5dqp"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.685717 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.699387 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.699427 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-fz5jn"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.699448 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.699861 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.700322 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.704534 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-8vsfg"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.704560 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.704609 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.720344 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.720825 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.721051 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.734481 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.734641 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.739785 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741037 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-dir\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741072 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741093 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-policies\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741111 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741187 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-dir\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741266 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e65a45b2-4747-4f30-bbfa-d8a711e702e8-trusted-ca\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741290 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741328 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/860d245e-aede-47bd-a8fe-b8bd2f79fd86-auth-proxy-config\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741345 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741369 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741602 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/860d245e-aede-47bd-a8fe-b8bd2f79fd86-machine-approver-tls\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.741676 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.742295 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-policies\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.742645 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/860d245e-aede-47bd-a8fe-b8bd2f79fd86-auth-proxy-config\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.742169 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.742889 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.742973 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e65a45b2-4747-4f30-bbfa-d8a711e702e8-serving-cert\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.743222 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.743271 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.743312 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tzx79\" (UniqueName: \"kubernetes.io/projected/00a48e62-fdf7-4d8f-846f-295c3cb4489e-kube-api-access-tzx79\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.743350 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e65a45b2-4747-4f30-bbfa-d8a711e702e8-config\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.743392 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.743432 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.743523 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-57h7j\" (UniqueName: \"kubernetes.io/projected/86f2d26a-630b-4a98-9dc3-c1ec245d7b6b-kube-api-access-57h7j\") pod \"downloads-747b44746d-qnl9q\" (UID: \"86f2d26a-630b-4a98-9dc3-c1ec245d7b6b\") " pod="openshift-console/downloads-747b44746d-qnl9q" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.743575 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/860d245e-aede-47bd-a8fe-b8bd2f79fd86-config\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.743605 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zff75\" (UniqueName: \"kubernetes.io/projected/860d245e-aede-47bd-a8fe-b8bd2f79fd86-kube-api-access-zff75\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.743637 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.743744 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8vd57\" (UniqueName: \"kubernetes.io/projected/e65a45b2-4747-4f30-bbfa-d8a711e702e8-kube-api-access-8vd57\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc 
kubenswrapper[5118]: I1208 19:31:07.745072 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/860d245e-aede-47bd-a8fe-b8bd2f79fd86-config\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.745574 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.746083 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e65a45b2-4747-4f30-bbfa-d8a711e702e8-config\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.746294 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.746477 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.746581 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e65a45b2-4747-4f30-bbfa-d8a711e702e8-trusted-ca\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.746840 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.747337 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/860d245e-aede-47bd-a8fe-b8bd2f79fd86-machine-approver-tls\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.747798 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.748088 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.748137 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.748136 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.749230 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.750143 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.750303 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.750956 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e65a45b2-4747-4f30-bbfa-d8a711e702e8-serving-cert\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.761510 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.765187 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8htc9"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.765411 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.777223 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.777264 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-wb5jl"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.777498 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.783191 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.783519 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-wb5jl" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.787334 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rxwj8"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.787540 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.790646 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.791917 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.792114 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.804532 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.804595 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.804743 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.809495 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xc9vh"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.809616 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814205 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-b68tb"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814238 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-tkctz"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814253 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-kk4vd"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814270 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814284 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814297 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814407 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814430 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-qnl9q"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814453 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814468 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-x84b4"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814481 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814496 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8htc9"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814510 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814429 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814522 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-k49rf"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814645 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814659 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814677 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814724 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xc9vh"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814740 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814756 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814773 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-r5dqp"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.814789 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gjccc"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.818051 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-lf9n6"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.818248 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.820364 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.820423 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.820438 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.820454 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.820465 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.820478 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-q8qqz"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.821169 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lf9n6" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.825222 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gjccc"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.825257 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-wb5jl"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.825269 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.825282 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lf9n6"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.825293 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82"] Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.825407 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-q8qqz" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.840606 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.860607 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.880884 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.900652 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.920589 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.941038 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.960260 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5118]: I1208 19:31:07.981084 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.000917 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.020890 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.044935 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.060656 
5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.079520 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.099979 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.120110 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.140287 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.160201 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.179993 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.201914 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.220415 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.240658 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.260990 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.281902 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.300720 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.321519 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.341714 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.361744 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.382484 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 19:31:08 
crc kubenswrapper[5118]: I1208 19:31:08.400982 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.421262 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.440602 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.461580 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.480472 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.500703 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.521113 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.540468 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.561512 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.580896 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.600987 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.620329 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.638725 5118 request.go:752] "Waited before sending request" delay="1.007383042s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.648851 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.660235 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.782017 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.800996 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.821394 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.840277 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.861931 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.880189 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.902186 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.920639 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.941275 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.960721 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 19:31:08 crc kubenswrapper[5118]: I1208 19:31:08.980796 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.001527 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.020756 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.041215 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.061841 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.081217 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.102127 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.122424 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 19:31:09 crc 
kubenswrapper[5118]: I1208 19:31:09.142342 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.162012 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.181262 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.200464 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.221472 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.321298 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.339729 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.361242 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.380832 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.400025 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.429475 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.441039 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.460740 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.480594 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.500761 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.520624 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.540162 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.560955 5118 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.581673 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.600427 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.620682 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.640373 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.659213 5118 request.go:752] "Waited before sending request" delay="1.866724258s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-sysctl-allowlist&limit=500&resourceVersion=0" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.661902 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 08 19:31:09 crc kubenswrapper[5118]: E1208 19:31:09.694008 5118 projected.go:289] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.700932 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 19:31:09 crc kubenswrapper[5118]: E1208 19:31:09.715620 5118 projected.go:289] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.721223 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 19:31:09 crc kubenswrapper[5118]: E1208 19:31:09.737450 5118 projected.go:289] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.740841 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 19:31:09 crc kubenswrapper[5118]: E1208 19:31:09.754219 5118 projected.go:289] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.761363 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.778944 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/97e901dc-7a73-42d4-bbb9-3a7391a79105-available-featuregates\") pod \"openshift-config-operator-5777786469-fz5jn\" (UID: \"97e901dc-7a73-42d4-bbb9-3a7391a79105\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.778988 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-bound-sa-token\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779009 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/db584c29-faf0-48cd-ac87-3af21a6fcbe4-console-serving-cert\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779028 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-trusted-ca\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779044 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-config\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779061 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b138de57-89ae-4cf5-8136-433862988df2-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779081 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jxd9\" (UniqueName: \"kubernetes.io/projected/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-kube-api-access-2jxd9\") pod \"openshift-apiserver-operator-846cbfc458-47lhr\" (UID: \"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779111 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97e901dc-7a73-42d4-bbb9-3a7391a79105-serving-cert\") pod \"openshift-config-operator-5777786469-fz5jn\" (UID: \"97e901dc-7a73-42d4-bbb9-3a7391a79105\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779220 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-trusted-ca-bundle\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779270 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-client-ca\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779306 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcls6\" (UniqueName: \"kubernetes.io/projected/db584c29-faf0-48cd-ac87-3af21a6fcbe4-kube-api-access-jcls6\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779334 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b138de57-89ae-4cf5-8136-433862988df2-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779391 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779438 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-certificates\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779470 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-config\") pod \"openshift-apiserver-operator-846cbfc458-47lhr\" (UID: \"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779499 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-console-config\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779531 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88131373-e414-436f-83e1-9d4aa4b55f62-tmp\") pod 
\"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779560 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx9kk\" (UniqueName: \"kubernetes.io/projected/97e901dc-7a73-42d4-bbb9-3a7391a79105-kube-api-access-wx9kk\") pod \"openshift-config-operator-5777786469-fz5jn\" (UID: \"97e901dc-7a73-42d4-bbb9-3a7391a79105\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779649 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fx42\" (UniqueName: \"kubernetes.io/projected/b138de57-89ae-4cf5-8136-433862988df2-kube-api-access-7fx42\") pod \"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779755 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-tls\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779832 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-service-ca\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.779932 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-47lhr\" (UID: \"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.780558 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hft55\" (UniqueName: \"kubernetes.io/projected/6dcf4602-a9b9-40b0-af37-2a69edc555f0-kube-api-access-hft55\") pod \"cluster-samples-operator-6b564684c8-6xz4z\" (UID: \"6dcf4602-a9b9-40b0-af37-2a69edc555f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.780928 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgz5w\" (UniqueName: \"kubernetes.io/projected/88131373-e414-436f-83e1-9d4aa4b55f62-kube-api-access-qgz5w\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.781455 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 19:31:09 crc 
kubenswrapper[5118]: I1208 19:31:09.781627 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/db584c29-faf0-48cd-ac87-3af21a6fcbe4-console-oauth-config\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: E1208 19:31:09.784720 5118 projected.go:289] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.785526 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.785700 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6dcf4602-a9b9-40b0-af37-2a69edc555f0-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6xz4z\" (UID: \"6dcf4602-a9b9-40b0-af37-2a69edc555f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z" Dec 08 19:31:09 crc kubenswrapper[5118]: E1208 19:31:09.785988 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.285972282 +0000 UTC m=+122.578817739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.786029 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a7dc4f4-9762-4968-b509-c2ee68240e9b-installation-pull-secrets\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.786061 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjh9l\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-kube-api-access-rjh9l\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.786084 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-oauth-serving-cert\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.786302 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b138de57-89ae-4cf5-8136-433862988df2-config\") pod \"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.786459 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a7dc4f4-9762-4968-b509-c2ee68240e9b-ca-trust-extracted\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.786631 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88131373-e414-436f-83e1-9d4aa4b55f62-serving-cert\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.801226 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.820940 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.840957 
5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.859946 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.880107 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.887796 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.887950 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f876ae2-ff59-421f-8f12-b6d980abb001-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.887977 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af286630-dbd3-48df-93d0-52acf80a3a67-metrics-tls\") pod \"dns-operator-799b87ffcd-x84b4\" (UID: \"af286630-dbd3-48df-93d0-52acf80a3a67\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.887994 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93bc8cd9-3692-4406-8351-3a273fa1d9c8-webhook-cert\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888016 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/97e901dc-7a73-42d4-bbb9-3a7391a79105-available-featuregates\") pod \"openshift-config-operator-5777786469-fz5jn\" (UID: \"97e901dc-7a73-42d4-bbb9-3a7391a79105\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888046 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/db584c29-faf0-48cd-ac87-3af21a6fcbe4-console-serving-cert\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888063 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-bound-sa-token\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888079 
5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfnhk\" (UniqueName: \"kubernetes.io/projected/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-kube-api-access-gfnhk\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888096 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-config\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888114 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-registration-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888128 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0179285f-606e-490f-b531-c95df3483e77-cert\") pod \"ingress-canary-lf9n6\" (UID: \"0179285f-606e-490f-b531-c95df3483e77\") " pod="openshift-ingress-canary/ingress-canary-lf9n6" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888144 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b138de57-89ae-4cf5-8136-433862988df2-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888165 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r84r\" (UniqueName: \"kubernetes.io/projected/f4cffd32-5b39-471d-aacb-44067449bf9a-kube-api-access-9r84r\") pod \"kube-storage-version-migrator-operator-565b79b866-qvvjj\" (UID: \"f4cffd32-5b39-471d-aacb-44067449bf9a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888183 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93bc8cd9-3692-4406-8351-3a273fa1d9c8-apiservice-cert\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888209 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab666d86-db2b-4489-a868-8d24159ea775-secret-volume\") pod \"collect-profiles-29420370-s24t5\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888226 5118 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2jxd9\" (UniqueName: \"kubernetes.io/projected/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-kube-api-access-2jxd9\") pod \"openshift-apiserver-operator-846cbfc458-47lhr\" (UID: \"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888243 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97e901dc-7a73-42d4-bbb9-3a7391a79105-serving-cert\") pod \"openshift-config-operator-5777786469-fz5jn\" (UID: \"97e901dc-7a73-42d4-bbb9-3a7391a79105\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888260 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-plugins-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888277 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq6vj\" (UniqueName: \"kubernetes.io/projected/3a1eebb9-9d59-41be-bf07-445f24f0eb35-kube-api-access-fq6vj\") pod \"multus-admission-controller-69db94689b-r5dqp\" (UID: \"3a1eebb9-9d59-41be-bf07-445f24f0eb35\") " pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888295 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4cffd32-5b39-471d-aacb-44067449bf9a-config\") pod \"kube-storage-version-migrator-operator-565b79b866-qvvjj\" (UID: \"f4cffd32-5b39-471d-aacb-44067449bf9a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888314 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d3ab55-5d06-433f-9c10-5113c2f9f367-serving-cert\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888330 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/13ce33b9-2283-4f53-8400-442c0ee364e5-node-bootstrap-token\") pod \"machine-config-server-q8qqz\" (UID: \"13ce33b9-2283-4f53-8400-442c0ee364e5\") " pod="openshift-machine-config-operator/machine-config-server-q8qqz" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888346 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/943f723e-defa-4cda-914e-964cdf480831-tmp\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888363 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab666d86-db2b-4489-a868-8d24159ea775-config-volume\") pod \"collect-profiles-29420370-s24t5\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888489 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jcls6\" (UniqueName: \"kubernetes.io/projected/db584c29-faf0-48cd-ac87-3af21a6fcbe4-kube-api-access-jcls6\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888558 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b138de57-89ae-4cf5-8136-433862988df2-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888593 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6574d02-8035-49ea-8d01-df1b3c1d1433-serving-cert\") pod \"service-ca-operator-5b9c976747-57q82\" (UID: \"a6574d02-8035-49ea-8d01-df1b3c1d1433\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888647 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88131373-e414-436f-83e1-9d4aa4b55f62-tmp\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888671 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wx9kk\" (UniqueName: \"kubernetes.io/projected/97e901dc-7a73-42d4-bbb9-3a7391a79105-kube-api-access-wx9kk\") pod \"openshift-config-operator-5777786469-fz5jn\" (UID: \"97e901dc-7a73-42d4-bbb9-3a7391a79105\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888826 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-certificates\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888853 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-config\") pod \"openshift-apiserver-operator-846cbfc458-47lhr\" (UID: \"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888875 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/13ce33b9-2283-4f53-8400-442c0ee364e5-certs\") pod \"machine-config-server-q8qqz\" (UID: \"13ce33b9-2283-4f53-8400-442c0ee364e5\") " pod="openshift-machine-config-operator/machine-config-server-q8qqz" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888900 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/943f723e-defa-4cda-914e-964cdf480831-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888924 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf47c\" (UniqueName: \"kubernetes.io/projected/df7bb012-1926-4cf5-97ee-990d99a956b7-kube-api-access-lf47c\") pod \"migrator-866fcbc849-8wzbs\" (UID: \"df7bb012-1926-4cf5-97ee-990d99a956b7\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888943 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/75d3ab55-5d06-433f-9c10-5113c2f9f367-tmp-dir\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.888963 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf4n4\" (UniqueName: \"kubernetes.io/projected/93bc8cd9-3692-4406-8351-3a273fa1d9c8-kube-api-access-hf4n4\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889032 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6574d02-8035-49ea-8d01-df1b3c1d1433-config\") pod \"service-ca-operator-5b9c976747-57q82\" (UID: \"a6574d02-8035-49ea-8d01-df1b3c1d1433\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889051 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/943f723e-defa-4cda-914e-964cdf480831-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889267 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97b35cab-0a8d-4331-8724-cbe640b9e24c-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889293 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p894h\" (UniqueName: 
\"kubernetes.io/projected/943f723e-defa-4cda-914e-964cdf480831-kube-api-access-p894h\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889312 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35446e1e-d728-44f3-b17f-372a50dbcb73-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889329 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889348 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d3ab55-5d06-433f-9c10-5113c2f9f367-config\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889365 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e349745-eed9-4471-abae-b45e90ce805d-config\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889383 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-tmp\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889401 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/97876313-0296-4efa-b7ea-403570a2cd81-signing-key\") pod \"service-ca-74545575db-wb5jl\" (UID: \"97876313-0296-4efa-b7ea-403570a2cd81\") " pod="openshift-service-ca/service-ca-74545575db-wb5jl" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889431 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qgz5w\" (UniqueName: \"kubernetes.io/projected/88131373-e414-436f-83e1-9d4aa4b55f62-kube-api-access-qgz5w\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889454 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4e349745-eed9-4471-abae-b45e90ce805d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889474 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-mountpoint-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889492 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tstr\" (UniqueName: \"kubernetes.io/projected/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-kube-api-access-6tstr\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:09 crc kubenswrapper[5118]: E1208 19:31:09.889677 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.389659406 +0000 UTC m=+122.682504863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889759 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/574501a5-bb4b-4c42-9046-e00bc9447f56-ready\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889784 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6dcf4602-a9b9-40b0-af37-2a69edc555f0-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6xz4z\" (UID: \"6dcf4602-a9b9-40b0-af37-2a69edc555f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889806 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f876ae2-ff59-421f-8f12-b6d980abb001-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.889895 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/97e901dc-7a73-42d4-bbb9-3a7391a79105-available-featuregates\") pod 
\"openshift-config-operator-5777786469-fz5jn\" (UID: \"97e901dc-7a73-42d4-bbb9-3a7391a79105\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890247 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b138de57-89ae-4cf5-8136-433862988df2-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890592 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-profile-collector-cert\") pod \"olm-operator-5cdf44d969-wbzcd\" (UID: \"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890620 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e349745-eed9-4471-abae-b45e90ce805d-serving-cert\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890638 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4e349745-eed9-4471-abae-b45e90ce805d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890662 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/72043ba9-5052-46eb-8c7c-2e61734cfd17-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qbjbk\" (UID: \"72043ba9-5052-46eb-8c7c-2e61734cfd17\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890714 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4twn\" (UniqueName: \"kubernetes.io/projected/97876313-0296-4efa-b7ea-403570a2cd81-kube-api-access-f4twn\") pod \"service-ca-74545575db-wb5jl\" (UID: \"97876313-0296-4efa-b7ea-403570a2cd81\") " pod="openshift-service-ca/service-ca-74545575db-wb5jl" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890767 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-tmpfs\") pod \"olm-operator-5cdf44d969-wbzcd\" (UID: \"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890787 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlqfv\" (UniqueName: 
\"kubernetes.io/projected/574501a5-bb4b-4c42-9046-e00bc9447f56-kube-api-access-jlqfv\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890852 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f876ae2-ff59-421f-8f12-b6d980abb001-config\") pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890871 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f8fbbac-99ac-4a11-9f93-610d12177e71-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890921 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k622\" (UniqueName: \"kubernetes.io/projected/a6574d02-8035-49ea-8d01-df1b3c1d1433-kube-api-access-6k622\") pod \"service-ca-operator-5b9c976747-57q82\" (UID: \"a6574d02-8035-49ea-8d01-df1b3c1d1433\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890951 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.890983 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891001 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/90514180-5ed3-4eb5-b13e-cd3b90998a22-stats-auth\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891018 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4cffd32-5b39-471d-aacb-44067449bf9a-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-qvvjj\" (UID: \"f4cffd32-5b39-471d-aacb-44067449bf9a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891048 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-trusted-ca\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891088 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/75d3ab55-5d06-433f-9c10-5113c2f9f367-etcd-ca\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891254 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kssmv\" (UniqueName: \"kubernetes.io/projected/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-kube-api-access-kssmv\") pod \"olm-operator-5cdf44d969-wbzcd\" (UID: \"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891281 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q2ks\" (UniqueName: \"kubernetes.io/projected/9f8fbbac-99ac-4a11-9f93-610d12177e71-kube-api-access-8q2ks\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891300 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/af286630-dbd3-48df-93d0-52acf80a3a67-tmp-dir\") pod \"dns-operator-799b87ffcd-x84b4\" (UID: \"af286630-dbd3-48df-93d0-52acf80a3a67\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891514 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97b35cab-0a8d-4331-8724-cbe640b9e24c-srv-cert\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891540 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-socket-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891558 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/93bc8cd9-3692-4406-8351-3a273fa1d9c8-tmpfs\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891593 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-srv-cert\") pod \"olm-operator-5cdf44d969-wbzcd\" (UID: 
\"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891664 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35446e1e-d728-44f3-b17f-372a50dbcb73-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891790 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/97876313-0296-4efa-b7ea-403570a2cd81-signing-cabundle\") pod \"service-ca-74545575db-wb5jl\" (UID: \"97876313-0296-4efa-b7ea-403570a2cd81\") " pod="openshift-service-ca/service-ca-74545575db-wb5jl" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.891831 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-certificates\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892014 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-trusted-ca-bundle\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892096 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f76aa800-8554-45e3-ab38-e5b8efd7c3ad-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nhcw5\" (UID: \"f76aa800-8554-45e3-ab38-e5b8efd7c3ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892139 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d19f60aa-72cf-4a40-a402-300df68ad28f-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-pkb44\" (UID: \"d19f60aa-72cf-4a40-a402-300df68ad28f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892174 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892259 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-client-ca\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: 
\"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892288 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/75d3ab55-5d06-433f-9c10-5113c2f9f367-etcd-service-ca\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892310 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/35446e1e-d728-44f3-b17f-372a50dbcb73-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892337 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892359 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/90514180-5ed3-4eb5-b13e-cd3b90998a22-default-certificate\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892428 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b309434e-b723-47e5-bce5-30f0c1ca2a1e-config-volume\") pod \"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892515 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjpbn\" (UniqueName: \"kubernetes.io/projected/b309434e-b723-47e5-bce5-30f0c1ca2a1e-kube-api-access-sjpbn\") pod \"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892548 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892584 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-console-config\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892609 
5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7fx42\" (UniqueName: \"kubernetes.io/projected/b138de57-89ae-4cf5-8136-433862988df2-kube-api-access-7fx42\") pod \"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892629 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r59v\" (UniqueName: \"kubernetes.io/projected/f76aa800-8554-45e3-ab38-e5b8efd7c3ad-kube-api-access-6r59v\") pod \"machine-config-controller-f9cdd68f7-nhcw5\" (UID: \"f76aa800-8554-45e3-ab38-e5b8efd7c3ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892650 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4xp6\" (UniqueName: \"kubernetes.io/projected/90514180-5ed3-4eb5-b13e-cd3b90998a22-kube-api-access-q4xp6\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892739 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75d3ab55-5d06-433f-9c10-5113c2f9f367-etcd-client\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892796 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35446e1e-d728-44f3-b17f-372a50dbcb73-config\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892817 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f8fbbac-99ac-4a11-9f93-610d12177e71-images\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892840 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl7xg\" (UniqueName: \"kubernetes.io/projected/af286630-dbd3-48df-93d0-52acf80a3a67-kube-api-access-bl7xg\") pod \"dns-operator-799b87ffcd-x84b4\" (UID: \"af286630-dbd3-48df-93d0-52acf80a3a67\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892868 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ltdq\" (UniqueName: \"kubernetes.io/projected/72043ba9-5052-46eb-8c7c-2e61734cfd17-kube-api-access-2ltdq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qbjbk\" (UID: \"72043ba9-5052-46eb-8c7c-2e61734cfd17\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892919 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b309434e-b723-47e5-bce5-30f0c1ca2a1e-metrics-tls\") pod \"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.892960 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-tls\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.893026 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5k8h\" (UniqueName: \"kubernetes.io/projected/97b35cab-0a8d-4331-8724-cbe640b9e24c-kube-api-access-x5k8h\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.893053 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-service-ca\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.893076 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj2l8\" (UniqueName: \"kubernetes.io/projected/75d3ab55-5d06-433f-9c10-5113c2f9f367-kube-api-access-xj2l8\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.893098 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfnfk\" (UniqueName: \"kubernetes.io/projected/0179285f-606e-490f-b531-c95df3483e77-kube-api-access-dfnfk\") pod \"ingress-canary-lf9n6\" (UID: \"0179285f-606e-490f-b531-c95df3483e77\") " pod="openshift-ingress-canary/ingress-canary-lf9n6" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.894073 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88131373-e414-436f-83e1-9d4aa4b55f62-tmp\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.895469 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-47lhr\" (UID: \"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.895519 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-gfx72\" (UniqueName: \"kubernetes.io/projected/d19f60aa-72cf-4a40-a402-300df68ad28f-kube-api-access-gfx72\") pod \"package-server-manager-77f986bd66-pkb44\" (UID: \"d19f60aa-72cf-4a40-a402-300df68ad28f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.895545 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97b35cab-0a8d-4331-8724-cbe640b9e24c-tmpfs\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.895578 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hft55\" (UniqueName: \"kubernetes.io/projected/6dcf4602-a9b9-40b0-af37-2a69edc555f0-kube-api-access-hft55\") pod \"cluster-samples-operator-6b564684c8-6xz4z\" (UID: \"6dcf4602-a9b9-40b0-af37-2a69edc555f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.895628 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-csi-data-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.895650 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/574501a5-bb4b-4c42-9046-e00bc9447f56-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.896293 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3a1eebb9-9d59-41be-bf07-445f24f0eb35-webhook-certs\") pod \"multus-admission-controller-69db94689b-r5dqp\" (UID: \"3a1eebb9-9d59-41be-bf07-445f24f0eb35\") " pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.896343 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstcl\" (UniqueName: \"kubernetes.io/projected/ab666d86-db2b-4489-a868-8d24159ea775-kube-api-access-bstcl\") pod \"collect-profiles-29420370-s24t5\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.896374 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.896415 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/db584c29-faf0-48cd-ac87-3af21a6fcbe4-console-oauth-config\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.896440 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f76aa800-8554-45e3-ab38-e5b8efd7c3ad-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nhcw5\" (UID: \"f76aa800-8554-45e3-ab38-e5b8efd7c3ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.896465 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.896512 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-oauth-serving-cert\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.896545 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90514180-5ed3-4eb5-b13e-cd3b90998a22-metrics-certs\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.898118 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.898262 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b309434e-b723-47e5-bce5-30f0c1ca2a1e-tmp-dir\") pod \"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:09 crc kubenswrapper[5118]: E1208 19:31:09.898782 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.398752699 +0000 UTC m=+122.691598156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.899495 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/574501a5-bb4b-4c42-9046-e00bc9447f56-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.899567 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a7dc4f4-9762-4968-b509-c2ee68240e9b-installation-pull-secrets\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.899623 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rjh9l\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-kube-api-access-rjh9l\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.899653 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b138de57-89ae-4cf5-8136-433862988df2-config\") pod \"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.900073 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a7dc4f4-9762-4968-b509-c2ee68240e9b-ca-trust-extracted\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.900123 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3f876ae2-ff59-421f-8f12-b6d980abb001-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.900153 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9tk\" (UniqueName: \"kubernetes.io/projected/13ce33b9-2283-4f53-8400-442c0ee364e5-kube-api-access-pp9tk\") pod \"machine-config-server-q8qqz\" (UID: \"13ce33b9-2283-4f53-8400-442c0ee364e5\") " pod="openshift-machine-config-operator/machine-config-server-q8qqz" Dec 08 19:31:09 crc kubenswrapper[5118]: 
I1208 19:31:09.903597 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6mk6\" (UniqueName: \"kubernetes.io/projected/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-kube-api-access-j6mk6\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.904400 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88131373-e414-436f-83e1-9d4aa4b55f62-serving-cert\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.904537 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f8fbbac-99ac-4a11-9f93-610d12177e71-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.904633 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a7dc4f4-9762-4968-b509-c2ee68240e9b-ca-trust-extracted\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.904947 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90514180-5ed3-4eb5-b13e-cd3b90998a22-service-ca-bundle\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.905673 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.921988 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.941111 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 19:31:09 crc kubenswrapper[5118]: I1208 19:31:09.980720 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.001212 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.005567 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " 
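
The bulk of the reconciler entries above are routine volume processing: each pod volume is verified as attached (VerifyControllerAttachedVolume), queued for mounting (MountVolume started), and finally set up (MountVolume.SetUp succeeded). The many kubernetes.io/projected "kube-api-access-*" volumes are the bound service-account token projections the kubelet builds for every pod. As a rough sketch of what such a volume holds, here is the conventional kube-api-access projection expressed with the k8s.io/api/core/v1 types; the log records only the volume name (kube-api-access-p894h appears above), so the three sources listed are the standard defaults, assumed rather than read from this node:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // the conventional bound-token lifetime (~1h); an assumption here
	vol := corev1.Volume{
		Name: "kube-api-access-p894h", // volume name as logged above
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Bound, auto-rotated service-account token.
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					}},
					// Cluster CA bundle for verifying the API server.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// The pod's own namespace via the downward API.
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(vol.Name, "with", len(vol.VolumeSource.Projected.Sources), "projected sources")
}
```

The kubelet materializes this projection as the token, ca.crt, and namespace files under /var/run/secrets/kubernetes.io/serviceaccount inside the container.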
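Two operations in this window fail the same way: UnmountVolume.TearDown for the released PVC (pod 9e9b5059-...) and MountVolume.MountDevice for image-registry-66587d64c8-k49rf both report that driver kubevirt.io.hostpath-provisioner was "not found in the list of registered CSI drivers". The kubelet learns about CSI drivers through registration sockets that each driver's node-driver-registrar drops under the kubelet's plugin-registration directory, and the csi-hostpathplugin-xc9vh pod that provides this driver is itself still having its volumes mounted above, so the error is expected to clear once that pod is running. A minimal diagnostic sketch for checking what the kubelet can currently see, assuming the default kubelet root directory (adjust if --root-dir is overridden):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Lists the plugin-registration sockets the kubelet watches. A CSI
// driver's node-driver-registrar creates a <driver-name>-reg.sock
// here; until it does, mounts referencing that driver fail with
// "not found in the list of registered CSI drivers".
func main() {
	// Assumption: default kubelet root dir (--root-dir=/var/lib/kubelet).
	dir := "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintf(os.Stderr, "reading %s: %v\n", dir, err)
		os.Exit(1)
	}
	if len(entries) == 0 {
		fmt.Println("no plugins registered yet")
	}
	for _, e := range entries {
		fmt.Println(filepath.Join(dir, e.Name()))
	}
}
```

On this node one would expect an entry for kubevirt.io.hostpath-provisioner (typically named <driver>-reg.sock) to appear once the plugin pod's registration-dir host-path volume, mounted above, is in use by the registrar.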
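Each failure is parked by nestedpendingoperations with "No retries permitted until ... (durationBeforeRetry 500ms)": rather than retrying in a tight loop, the volume manager schedules the next attempt after a growing delay. The sketch below imitates that policy under stated assumptions; the 500ms initial delay and doubling mirror what these entries show, while the cap and attempt limit are illustrative only (the kubelet keeps retrying indefinitely with a bounded maximum delay):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the pause after every failure
// up to maxDelay -- the same shape as the kubelet's
// nestedpendingoperations "durationBeforeRetry" scheduling.
func retryWithBackoff(op func() error, initial, maxDelay time.Duration, attempts int) error {
	delay := initial
	for i := 1; i <= attempts; i++ {
		err := op()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed (%v); no retries permitted for %s\n", i, err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return errors.New("gave up after max attempts")
}

func main() {
	tries := 0
	// Simulated MountDevice that succeeds once the CSI driver "registers".
	mountDevice := func() error {
		if tries++; tries < 4 {
			return errors.New("driver kubevirt.io.hostpath-provisioner not registered")
		}
		return nil
	}
	if err := retryWithBackoff(mountDevice, 500*time.Millisecond, 2*time.Minute, 10); err != nil {
		fmt.Println(err)
	}
}
```

In this excerpt all three failures schedule their retry exactly 500ms out (the failure at 19:31:09.889 is retried at 19:31:10.389, m=+122.68s to m=+122.80s), i.e. they are still at the initial backoff step.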
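The reflector.go "Caches populated" entries interleaved above mark the kubelet's watch-based caches finishing their initial LIST; note the reflector names are scoped to a single object (e.g. object-"openshift-ingress-canary"/"canary-serving-cert"), and secret and configmap volumes are only mounted once the corresponding cache is warm. The same populate-then-proceed pattern looks like this with a stock client-go shared informer; this shows the general mechanism only, since the kubelet actually runs per-object reflectors rather than namespace-wide informers, and the kubeconfig path is an assumption:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: local kubeconfig; in-cluster code would use rest.InClusterConfig().
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	secrets := factory.Core().V1().Secrets().Informer()
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Blocks until the reflector's initial LIST completes -- the moment
	// the kubelet would log "Caches populated".
	if !cache.WaitForCacheSync(stop, secrets.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("secret cache populated")
}
```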
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.005881 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90514180-5ed3-4eb5-b13e-cd3b90998a22-metrics-certs\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.005921 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b309434e-b723-47e5-bce5-30f0c1ca2a1e-tmp-dir\") pod \"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.005947 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/574501a5-bb4b-4c42-9046-e00bc9447f56-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.005968 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3f876ae2-ff59-421f-8f12-b6d980abb001-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.005985 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pp9tk\" (UniqueName: \"kubernetes.io/projected/13ce33b9-2283-4f53-8400-442c0ee364e5-kube-api-access-pp9tk\") pod \"machine-config-server-q8qqz\" (UID: \"13ce33b9-2283-4f53-8400-442c0ee364e5\") " pod="openshift-machine-config-operator/machine-config-server-q8qqz" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006004 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j6mk6\" (UniqueName: \"kubernetes.io/projected/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-kube-api-access-j6mk6\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006030 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f8fbbac-99ac-4a11-9f93-610d12177e71-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006047 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90514180-5ed3-4eb5-b13e-cd3b90998a22-service-ca-bundle\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006074 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/3f876ae2-ff59-421f-8f12-b6d980abb001-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006103 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af286630-dbd3-48df-93d0-52acf80a3a67-metrics-tls\") pod \"dns-operator-799b87ffcd-x84b4\" (UID: \"af286630-dbd3-48df-93d0-52acf80a3a67\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006125 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93bc8cd9-3692-4406-8351-3a273fa1d9c8-webhook-cert\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006157 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gfnhk\" (UniqueName: \"kubernetes.io/projected/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-kube-api-access-gfnhk\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006201 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-registration-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006224 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0179285f-606e-490f-b531-c95df3483e77-cert\") pod \"ingress-canary-lf9n6\" (UID: \"0179285f-606e-490f-b531-c95df3483e77\") " pod="openshift-ingress-canary/ingress-canary-lf9n6" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006250 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9r84r\" (UniqueName: \"kubernetes.io/projected/f4cffd32-5b39-471d-aacb-44067449bf9a-kube-api-access-9r84r\") pod \"kube-storage-version-migrator-operator-565b79b866-qvvjj\" (UID: \"f4cffd32-5b39-471d-aacb-44067449bf9a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006277 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93bc8cd9-3692-4406-8351-3a273fa1d9c8-apiservice-cert\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006305 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab666d86-db2b-4489-a868-8d24159ea775-secret-volume\") pod \"collect-profiles-29420370-s24t5\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006329 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-plugins-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006356 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fq6vj\" (UniqueName: \"kubernetes.io/projected/3a1eebb9-9d59-41be-bf07-445f24f0eb35-kube-api-access-fq6vj\") pod \"multus-admission-controller-69db94689b-r5dqp\" (UID: \"3a1eebb9-9d59-41be-bf07-445f24f0eb35\") " pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006381 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4cffd32-5b39-471d-aacb-44067449bf9a-config\") pod \"kube-storage-version-migrator-operator-565b79b866-qvvjj\" (UID: \"f4cffd32-5b39-471d-aacb-44067449bf9a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006405 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d3ab55-5d06-433f-9c10-5113c2f9f367-serving-cert\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006428 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/13ce33b9-2283-4f53-8400-442c0ee364e5-node-bootstrap-token\") pod \"machine-config-server-q8qqz\" (UID: \"13ce33b9-2283-4f53-8400-442c0ee364e5\") " pod="openshift-machine-config-operator/machine-config-server-q8qqz" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006450 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/943f723e-defa-4cda-914e-964cdf480831-tmp\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006476 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab666d86-db2b-4489-a868-8d24159ea775-config-volume\") pod \"collect-profiles-29420370-s24t5\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006508 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6574d02-8035-49ea-8d01-df1b3c1d1433-serving-cert\") pod \"service-ca-operator-5b9c976747-57q82\" (UID: \"a6574d02-8035-49ea-8d01-df1b3c1d1433\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006541 5118 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/13ce33b9-2283-4f53-8400-442c0ee364e5-certs\") pod \"machine-config-server-q8qqz\" (UID: \"13ce33b9-2283-4f53-8400-442c0ee364e5\") " pod="openshift-machine-config-operator/machine-config-server-q8qqz" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006558 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/943f723e-defa-4cda-914e-964cdf480831-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006578 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lf47c\" (UniqueName: \"kubernetes.io/projected/df7bb012-1926-4cf5-97ee-990d99a956b7-kube-api-access-lf47c\") pod \"migrator-866fcbc849-8wzbs\" (UID: \"df7bb012-1926-4cf5-97ee-990d99a956b7\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006595 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/75d3ab55-5d06-433f-9c10-5113c2f9f367-tmp-dir\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006611 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hf4n4\" (UniqueName: \"kubernetes.io/projected/93bc8cd9-3692-4406-8351-3a273fa1d9c8-kube-api-access-hf4n4\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.006644 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6574d02-8035-49ea-8d01-df1b3c1d1433-config\") pod \"service-ca-operator-5b9c976747-57q82\" (UID: \"a6574d02-8035-49ea-8d01-df1b3c1d1433\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.007416 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6574d02-8035-49ea-8d01-df1b3c1d1433-config\") pod \"service-ca-operator-5b9c976747-57q82\" (UID: \"a6574d02-8035-49ea-8d01-df1b3c1d1433\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.007566 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/574501a5-bb4b-4c42-9046-e00bc9447f56-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.008079 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-plugins-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:10 crc 
kubenswrapper[5118]: I1208 19:31:10.008147 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b309434e-b723-47e5-bce5-30f0c1ca2a1e-tmp-dir\") pod \"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.008274 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/75d3ab55-5d06-433f-9c10-5113c2f9f367-tmp-dir\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.008297 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.508256308 +0000 UTC m=+122.801101915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.008976 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3f876ae2-ff59-421f-8f12-b6d980abb001-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.008775 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f8fbbac-99ac-4a11-9f93-610d12177e71-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.008993 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/943f723e-defa-4cda-914e-964cdf480831-tmp\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.009180 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-registration-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.009217 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/943f723e-defa-4cda-914e-964cdf480831-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: 
\"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.009747 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90514180-5ed3-4eb5-b13e-cd3b90998a22-service-ca-bundle\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.009759 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/943f723e-defa-4cda-914e-964cdf480831-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010043 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97b35cab-0a8d-4331-8724-cbe640b9e24c-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010087 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p894h\" (UniqueName: \"kubernetes.io/projected/943f723e-defa-4cda-914e-964cdf480831-kube-api-access-p894h\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010122 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35446e1e-d728-44f3-b17f-372a50dbcb73-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010122 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab666d86-db2b-4489-a868-8d24159ea775-config-volume\") pod \"collect-profiles-29420370-s24t5\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010149 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010192 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d3ab55-5d06-433f-9c10-5113c2f9f367-config\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 
08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010219 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e349745-eed9-4471-abae-b45e90ce805d-config\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010248 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-tmp\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010276 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/97876313-0296-4efa-b7ea-403570a2cd81-signing-key\") pod \"service-ca-74545575db-wb5jl\" (UID: \"97876313-0296-4efa-b7ea-403570a2cd81\") " pod="openshift-service-ca/service-ca-74545575db-wb5jl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010285 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4cffd32-5b39-471d-aacb-44067449bf9a-config\") pod \"kube-storage-version-migrator-operator-565b79b866-qvvjj\" (UID: \"f4cffd32-5b39-471d-aacb-44067449bf9a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010360 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e349745-eed9-4471-abae-b45e90ce805d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010395 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-mountpoint-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010423 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6tstr\" (UniqueName: \"kubernetes.io/projected/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-kube-api-access-6tstr\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010477 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/574501a5-bb4b-4c42-9046-e00bc9447f56-ready\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010510 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f876ae2-ff59-421f-8f12-b6d980abb001-serving-cert\") 
pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010536 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-profile-collector-cert\") pod \"olm-operator-5cdf44d969-wbzcd\" (UID: \"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010557 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e349745-eed9-4471-abae-b45e90ce805d-serving-cert\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010577 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4e349745-eed9-4471-abae-b45e90ce805d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010605 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/72043ba9-5052-46eb-8c7c-2e61734cfd17-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qbjbk\" (UID: \"72043ba9-5052-46eb-8c7c-2e61734cfd17\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010633 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4twn\" (UniqueName: \"kubernetes.io/projected/97876313-0296-4efa-b7ea-403570a2cd81-kube-api-access-f4twn\") pod \"service-ca-74545575db-wb5jl\" (UID: \"97876313-0296-4efa-b7ea-403570a2cd81\") " pod="openshift-service-ca/service-ca-74545575db-wb5jl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010753 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-tmpfs\") pod \"olm-operator-5cdf44d969-wbzcd\" (UID: \"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010776 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jlqfv\" (UniqueName: \"kubernetes.io/projected/574501a5-bb4b-4c42-9046-e00bc9447f56-kube-api-access-jlqfv\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010784 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: 
\"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.010800 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f876ae2-ff59-421f-8f12-b6d980abb001-config\") pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.011625 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f876ae2-ff59-421f-8f12-b6d980abb001-config\") pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.011658 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d3ab55-5d06-433f-9c10-5113c2f9f367-config\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.011732 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f8fbbac-99ac-4a11-9f93-610d12177e71-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.011785 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6k622\" (UniqueName: \"kubernetes.io/projected/a6574d02-8035-49ea-8d01-df1b3c1d1433-kube-api-access-6k622\") pod \"service-ca-operator-5b9c976747-57q82\" (UID: \"a6574d02-8035-49ea-8d01-df1b3c1d1433\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.011821 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.011845 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.011873 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/90514180-5ed3-4eb5-b13e-cd3b90998a22-stats-auth\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.011902 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4cffd32-5b39-471d-aacb-44067449bf9a-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-qvvjj\" (UID: \"f4cffd32-5b39-471d-aacb-44067449bf9a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.011943 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/75d3ab55-5d06-433f-9c10-5113c2f9f367-etcd-ca\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.011971 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kssmv\" (UniqueName: \"kubernetes.io/projected/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-kube-api-access-kssmv\") pod \"olm-operator-5cdf44d969-wbzcd\" (UID: \"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.011996 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8q2ks\" (UniqueName: \"kubernetes.io/projected/9f8fbbac-99ac-4a11-9f93-610d12177e71-kube-api-access-8q2ks\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012019 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/af286630-dbd3-48df-93d0-52acf80a3a67-tmp-dir\") pod \"dns-operator-799b87ffcd-x84b4\" (UID: \"af286630-dbd3-48df-93d0-52acf80a3a67\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012049 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97b35cab-0a8d-4331-8724-cbe640b9e24c-srv-cert\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012077 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-socket-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012100 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/93bc8cd9-3692-4406-8351-3a273fa1d9c8-tmpfs\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012125 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-srv-cert\") pod \"olm-operator-5cdf44d969-wbzcd\" 
(UID: \"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012168 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35446e1e-d728-44f3-b17f-372a50dbcb73-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012192 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/97876313-0296-4efa-b7ea-403570a2cd81-signing-cabundle\") pod \"service-ca-74545575db-wb5jl\" (UID: \"97876313-0296-4efa-b7ea-403570a2cd81\") " pod="openshift-service-ca/service-ca-74545575db-wb5jl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012222 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f76aa800-8554-45e3-ab38-e5b8efd7c3ad-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nhcw5\" (UID: \"f76aa800-8554-45e3-ab38-e5b8efd7c3ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012251 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d19f60aa-72cf-4a40-a402-300df68ad28f-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-pkb44\" (UID: \"d19f60aa-72cf-4a40-a402-300df68ad28f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012279 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6574d02-8035-49ea-8d01-df1b3c1d1433-serving-cert\") pod \"service-ca-operator-5b9c976747-57q82\" (UID: \"a6574d02-8035-49ea-8d01-df1b3c1d1433\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012282 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012331 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/75d3ab55-5d06-433f-9c10-5113c2f9f367-etcd-service-ca\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012350 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/13ce33b9-2283-4f53-8400-442c0ee364e5-certs\") pod \"machine-config-server-q8qqz\" (UID: \"13ce33b9-2283-4f53-8400-442c0ee364e5\") " pod="openshift-machine-config-operator/machine-config-server-q8qqz" Dec 
08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012363 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/35446e1e-d728-44f3-b17f-372a50dbcb73-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012396 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/90514180-5ed3-4eb5-b13e-cd3b90998a22-default-certificate\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012404 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93bc8cd9-3692-4406-8351-3a273fa1d9c8-apiservice-cert\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012574 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/13ce33b9-2283-4f53-8400-442c0ee364e5-node-bootstrap-token\") pod \"machine-config-server-q8qqz\" (UID: \"13ce33b9-2283-4f53-8400-442c0ee364e5\") " pod="openshift-machine-config-operator/machine-config-server-q8qqz" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012646 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0179285f-606e-490f-b531-c95df3483e77-cert\") pod \"ingress-canary-lf9n6\" (UID: \"0179285f-606e-490f-b531-c95df3483e77\") " pod="openshift-ingress-canary/ingress-canary-lf9n6" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012806 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/35446e1e-d728-44f3-b17f-372a50dbcb73-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.012418 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b309434e-b723-47e5-bce5-30f0c1ca2a1e-config-volume\") pod \"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.013131 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e349745-eed9-4471-abae-b45e90ce805d-config\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.013132 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b309434e-b723-47e5-bce5-30f0c1ca2a1e-config-volume\") pod \"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " 
pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.013458 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/75d3ab55-5d06-433f-9c10-5113c2f9f367-etcd-service-ca\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.013627 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/75d3ab55-5d06-433f-9c10-5113c2f9f367-etcd-ca\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.014091 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-tmpfs\") pod \"olm-operator-5cdf44d969-wbzcd\" (UID: \"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.015079 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93bc8cd9-3692-4406-8351-3a273fa1d9c8-webhook-cert\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.015489 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/93bc8cd9-3692-4406-8351-3a273fa1d9c8-tmpfs\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.015555 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/97876313-0296-4efa-b7ea-403570a2cd81-signing-key\") pod \"service-ca-74545575db-wb5jl\" (UID: \"97876313-0296-4efa-b7ea-403570a2cd81\") " pod="openshift-service-ca/service-ca-74545575db-wb5jl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.015594 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-socket-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.015890 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sjpbn\" (UniqueName: \"kubernetes.io/projected/b309434e-b723-47e5-bce5-30f0c1ca2a1e-kube-api-access-sjpbn\") pod \"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.015960 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.016007 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6r59v\" (UniqueName: \"kubernetes.io/projected/f76aa800-8554-45e3-ab38-e5b8efd7c3ad-kube-api-access-6r59v\") pod \"machine-config-controller-f9cdd68f7-nhcw5\" (UID: \"f76aa800-8554-45e3-ab38-e5b8efd7c3ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.015966 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/af286630-dbd3-48df-93d0-52acf80a3a67-tmp-dir\") pod \"dns-operator-799b87ffcd-x84b4\" (UID: \"af286630-dbd3-48df-93d0-52acf80a3a67\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.016278 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97b35cab-0a8d-4331-8724-cbe640b9e24c-srv-cert\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.016512 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4cffd32-5b39-471d-aacb-44067449bf9a-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-qvvjj\" (UID: \"f4cffd32-5b39-471d-aacb-44067449bf9a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017281 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/97876313-0296-4efa-b7ea-403570a2cd81-signing-cabundle\") pod \"service-ca-74545575db-wb5jl\" (UID: \"97876313-0296-4efa-b7ea-403570a2cd81\") " pod="openshift-service-ca/service-ca-74545575db-wb5jl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017315 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35446e1e-d728-44f3-b17f-372a50dbcb73-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017350 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q4xp6\" (UniqueName: \"kubernetes.io/projected/90514180-5ed3-4eb5-b13e-cd3b90998a22-kube-api-access-q4xp6\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017368 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab666d86-db2b-4489-a868-8d24159ea775-secret-volume\") pod \"collect-profiles-29420370-s24t5\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017396 5118 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75d3ab55-5d06-433f-9c10-5113c2f9f367-etcd-client\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017431 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35446e1e-d728-44f3-b17f-372a50dbcb73-config\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017459 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f8fbbac-99ac-4a11-9f93-610d12177e71-images\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017492 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bl7xg\" (UniqueName: \"kubernetes.io/projected/af286630-dbd3-48df-93d0-52acf80a3a67-kube-api-access-bl7xg\") pod \"dns-operator-799b87ffcd-x84b4\" (UID: \"af286630-dbd3-48df-93d0-52acf80a3a67\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017503 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-profile-collector-cert\") pod \"olm-operator-5cdf44d969-wbzcd\" (UID: \"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017529 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ltdq\" (UniqueName: \"kubernetes.io/projected/72043ba9-5052-46eb-8c7c-2e61734cfd17-kube-api-access-2ltdq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qbjbk\" (UID: \"72043ba9-5052-46eb-8c7c-2e61734cfd17\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017711 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97b35cab-0a8d-4331-8724-cbe640b9e24c-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.017726 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90514180-5ed3-4eb5-b13e-cd3b90998a22-metrics-certs\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.018347 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b309434e-b723-47e5-bce5-30f0c1ca2a1e-metrics-tls\") pod 
\"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.018436 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4e349745-eed9-4471-abae-b45e90ce805d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.018847 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-mountpoint-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.018978 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x5k8h\" (UniqueName: \"kubernetes.io/projected/97b35cab-0a8d-4331-8724-cbe640b9e24c-kube-api-access-x5k8h\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019028 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xj2l8\" (UniqueName: \"kubernetes.io/projected/75d3ab55-5d06-433f-9c10-5113c2f9f367-kube-api-access-xj2l8\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019119 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35446e1e-d728-44f3-b17f-372a50dbcb73-config\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019166 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dfnfk\" (UniqueName: \"kubernetes.io/projected/0179285f-606e-490f-b531-c95df3483e77-kube-api-access-dfnfk\") pod \"ingress-canary-lf9n6\" (UID: \"0179285f-606e-490f-b531-c95df3483e77\") " pod="openshift-ingress-canary/ingress-canary-lf9n6" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019190 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f8fbbac-99ac-4a11-9f93-610d12177e71-images\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019313 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gfx72\" (UniqueName: \"kubernetes.io/projected/d19f60aa-72cf-4a40-a402-300df68ad28f-kube-api-access-gfx72\") pod \"package-server-manager-77f986bd66-pkb44\" (UID: \"d19f60aa-72cf-4a40-a402-300df68ad28f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019371 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97b35cab-0a8d-4331-8724-cbe640b9e24c-tmpfs\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019400 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-csi-data-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019415 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/943f723e-defa-4cda-914e-964cdf480831-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019499 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/574501a5-bb4b-4c42-9046-e00bc9447f56-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019549 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3a1eebb9-9d59-41be-bf07-445f24f0eb35-webhook-certs\") pod \"multus-admission-controller-69db94689b-r5dqp\" (UID: \"3a1eebb9-9d59-41be-bf07-445f24f0eb35\") " pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019592 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bstcl\" (UniqueName: \"kubernetes.io/projected/ab666d86-db2b-4489-a868-8d24159ea775-kube-api-access-bstcl\") pod \"collect-profiles-29420370-s24t5\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019617 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.019773 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/90514180-5ed3-4eb5-b13e-cd3b90998a22-default-certificate\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.020167 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/574501a5-bb4b-4c42-9046-e00bc9447f56-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.020260 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f76aa800-8554-45e3-ab38-e5b8efd7c3ad-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nhcw5\" (UID: \"f76aa800-8554-45e3-ab38-e5b8efd7c3ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.020319 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.020388 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97b35cab-0a8d-4331-8724-cbe640b9e24c-tmpfs\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.020570 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f76aa800-8554-45e3-ab38-e5b8efd7c3ad-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nhcw5\" (UID: \"f76aa800-8554-45e3-ab38-e5b8efd7c3ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.020701 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.022043 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f876ae2-ff59-421f-8f12-b6d980abb001-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.022202 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/574501a5-bb4b-4c42-9046-e00bc9447f56-ready\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.022257 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-tmp\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.024167 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-srv-cert\") pod \"olm-operator-5cdf44d969-wbzcd\" (UID: \"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.024307 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75d3ab55-5d06-433f-9c10-5113c2f9f367-etcd-client\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.024996 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-csi-data-dir\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.025615 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3a1eebb9-9d59-41be-bf07-445f24f0eb35-webhook-certs\") pod \"multus-admission-controller-69db94689b-r5dqp\" (UID: \"3a1eebb9-9d59-41be-bf07-445f24f0eb35\") " pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.026173 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.026330 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b309434e-b723-47e5-bce5-30f0c1ca2a1e-metrics-tls\") pod \"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.026517 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/90514180-5ed3-4eb5-b13e-cd3b90998a22-stats-auth\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.026518 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.026984 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/72043ba9-5052-46eb-8c7c-2e61734cfd17-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qbjbk\" (UID: \"72043ba9-5052-46eb-8c7c-2e61734cfd17\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 
19:31:10.027369 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f76aa800-8554-45e3-ab38-e5b8efd7c3ad-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nhcw5\" (UID: \"f76aa800-8554-45e3-ab38-e5b8efd7c3ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.027601 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f8fbbac-99ac-4a11-9f93-610d12177e71-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.028614 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.029588 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d19f60aa-72cf-4a40-a402-300df68ad28f-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-pkb44\" (UID: \"d19f60aa-72cf-4a40-a402-300df68ad28f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.029802 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e349745-eed9-4471-abae-b45e90ce805d-serving-cert\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.031062 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d3ab55-5d06-433f-9c10-5113c2f9f367-serving-cert\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.041519 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.063016 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.081452 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.118223 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.124003 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.124423 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.624405054 +0000 UTC m=+122.917250511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.125294 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.140608 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.162249 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.181651 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.201408 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.221211 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.224830 5118 projected.go:194] Error preparing data for projected volume kube-api-access-qxs6m for pod openshift-machine-api/machine-api-operator-755bb95488-vjsnr: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.224960 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1927987d-1fa4-4b00-b6f0-a7861eb10702-kube-api-access-qxs6m podName:1927987d-1fa4-4b00-b6f0-a7861eb10702 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.724931984 +0000 UTC m=+123.017777661 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qxs6m" (UniqueName: "kubernetes.io/projected/1927987d-1fa4-4b00-b6f0-a7861eb10702-kube-api-access-qxs6m") pod "machine-api-operator-755bb95488-vjsnr" (UID: "1927987d-1fa4-4b00-b6f0-a7861eb10702") : failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.225182 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.225374 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.725362785 +0000 UTC m=+123.018208242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.226082 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf"
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.226626 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.726597498 +0000 UTC m=+123.019442985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.240020 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.246399 5118 projected.go:194] Error preparing data for projected volume kube-api-access-lk6xl for pod openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7: failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.246542 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/04382913-99f0-4bca-abaa-952bbb21e06a-kube-api-access-lk6xl podName:04382913-99f0-4bca-abaa-952bbb21e06a nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.746510118 +0000 UTC m=+123.039355595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lk6xl" (UniqueName: "kubernetes.io/projected/04382913-99f0-4bca-abaa-952bbb21e06a-kube-api-access-lk6xl") pod "authentication-operator-7f5c659b84-bjxx7" (UID: "04382913-99f0-4bca-abaa-952bbb21e06a") : failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.260607 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.268158 5118 projected.go:194] Error preparing data for projected volume kube-api-access-w7zsw for pod openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9: failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.268277 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f7c859cf-4198-4549-b24d-d5cc7e650257-kube-api-access-w7zsw podName:f7c859cf-4198-4549-b24d-d5cc7e650257 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.768233078 +0000 UTC m=+123.061078535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w7zsw" (UniqueName: "kubernetes.io/projected/f7c859cf-4198-4549-b24d-d5cc7e650257-kube-api-access-w7zsw") pod "apiserver-8596bd845d-jpdh9" (UID: "f7c859cf-4198-4549-b24d-d5cc7e650257") : failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.280459 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.284405 5118 projected.go:194] Error preparing data for projected volume kube-api-access-nd4vw for pod openshift-apiserver/apiserver-9ddfb9f55-8vsfg: failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.284636 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9556f84b-c3ef-4dd1-8483-67e5960385a1-kube-api-access-nd4vw podName:9556f84b-c3ef-4dd1-8483-67e5960385a1 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.784599663 +0000 UTC m=+123.077445120 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nd4vw" (UniqueName: "kubernetes.io/projected/9556f84b-c3ef-4dd1-8483-67e5960385a1-kube-api-access-nd4vw") pod "apiserver-9ddfb9f55-8vsfg" (UID: "9556f84b-c3ef-4dd1-8483-67e5960385a1") : failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.301996 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.305395 5118 projected.go:194] Error preparing data for projected volume kube-api-access-78g6q for pod openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv: failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.305489 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d5ad6856-ba98-4f91-b102-7e41020e2ecf-kube-api-access-78g6q podName:d5ad6856-ba98-4f91-b102-7e41020e2ecf nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.80546545 +0000 UTC m=+123.098310907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-78g6q" (UniqueName: "kubernetes.io/projected/d5ad6856-ba98-4f91-b102-7e41020e2ecf-kube-api-access-78g6q") pod "route-controller-manager-776cdc94d6-575sv" (UID: "d5ad6856-ba98-4f91-b102-7e41020e2ecf") : failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.327708 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.327794 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.827773105 +0000 UTC m=+123.120618562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.328018 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf"
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.328357 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.82834786 +0000 UTC m=+123.121193317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.340575 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.353619 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b138de57-89ae-4cf5-8136-433862988df2-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt"
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.421120 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.429560 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.429753 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.929729952 +0000 UTC m=+123.222575409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.430440 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf"
Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.430840 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.930821172 +0000 UTC m=+123.223666629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.435457 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97e901dc-7a73-42d4-bbb9-3a7391a79105-serving-cert\") pod \"openshift-config-operator-5777786469-fz5jn\" (UID: \"97e901dc-7a73-42d4-bbb9-3a7391a79105\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn"
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.440534 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.454967 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6dcf4602-a9b9-40b0-af37-2a69edc555f0-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6xz4z\" (UID: \"6dcf4602-a9b9-40b0-af37-2a69edc555f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z"
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.461895 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.471987 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-config\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd"
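[Annotation, not part of the captured log: the recurring "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" failures above mean the hostpath provisioner's driver pod has not yet registered with this kubelet. A minimal client-go sketch for cross-checking which CSIDriver objects the API server knows about; the kubeconfig path and error handling are illustrative assumptions, and node-local registration is tracked separately by the kubelet, so a driver listed here can still be unregistered on the node.]

```go
// Diagnostic sketch (assumption-laden, not the kubelet's own code): list the
// CSIDriver objects known to the API server and compare against the name the
// kubelet reports as unregistered (kubevirt.io.hostpath-provisioner).
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed default kubeconfig location; adjust for the CRC environment.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// CSIDriver objects are cluster-scoped; the kubelet learns about a driver
	// only when its node plugin registers over the plugin-registration socket,
	// which is exactly what has not happened yet in the log above.
	drivers, err := client.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range drivers.Items {
		fmt.Println(d.Name)
	}
}
```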
reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.481521 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-config\") pod \"openshift-apiserver-operator-846cbfc458-47lhr\" (UID: \"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.500812 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.504171 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/db584c29-faf0-48cd-ac87-3af21a6fcbe4-console-serving-cert\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.527541 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.530308 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.532464 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.532523 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-trusted-ca\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.532667 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.032633156 +0000 UTC m=+123.325478623 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.532989 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.533462 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.033450107 +0000 UTC m=+123.326295564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.556577 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-bound-sa-token\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.560827 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.567958 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-tls\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.588181 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.595651 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-trusted-ca-bundle\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.600255 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.606800 5118 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-client-ca\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.634732 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.634982 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.134947493 +0000 UTC m=+123.427792990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.635362 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.635718 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.135708483 +0000 UTC m=+123.428553940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.639794 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.646289 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-console-config\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.669391 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.677535 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.701097 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.708246 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-service-ca\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.720809 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.732765 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-47lhr\" (UID: \"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.737526 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.737747 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qxs6m\" (UniqueName: \"kubernetes.io/projected/1927987d-1fa4-4b00-b6f0-a7861eb10702-kube-api-access-qxs6m\") pod 
\"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.737802 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.237770613 +0000 UTC m=+123.530616100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.738284 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.738607 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.238590426 +0000 UTC m=+123.531435903 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.740294 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.741212 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxs6m\" (UniqueName: \"kubernetes.io/projected/1927987d-1fa4-4b00-b6f0-a7861eb10702-kube-api-access-qxs6m\") pod \"machine-api-operator-755bb95488-vjsnr\" (UID: \"1927987d-1fa4-4b00-b6f0-a7861eb10702\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.751131 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/db584c29-faf0-48cd-ac87-3af21a6fcbe4-console-oauth-config\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.761240 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.769028 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/db584c29-faf0-48cd-ac87-3af21a6fcbe4-oauth-serving-cert\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.797089 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjh9l\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-kube-api-access-rjh9l\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.800597 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.812991 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a7dc4f4-9762-4968-b509-c2ee68240e9b-installation-pull-secrets\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.820992 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.831572 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b138de57-89ae-4cf5-8136-433862988df2-config\") pod 
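[Annotation, not part of the captured log: the "No retries permitted until ... (durationBeforeRetry 500ms)" fields show the kubelet rescheduling each failed volume operation with a growing delay rather than retrying in a tight loop. A sketch of that pacing using `wait.Backoff` from apimachinery; the constants below mirror the visible 500ms first step but are otherwise assumptions, not values read from kubelet source.]

```go
// Illustrative sketch of exponential retry pacing behind the
// "durationBeforeRetry 500ms" messages. Assumed constants: 500ms initial
// delay, factor 2, 8 steps; the kubelet's real backoff parameters may differ.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	b := wait.Backoff{
		Duration: 500 * time.Millisecond, // matches the first durationBeforeRetry
		Factor:   2.0,                    // delay doubles after each failure
		Steps:    8,                      // stop growing after this many steps
	}
	for attempt := 1; attempt <= 8; attempt++ {
		// Step() returns the current delay and advances the backoff state.
		fmt.Printf("attempt %d: next retry in %v\n", attempt, b.Step())
	}
}
```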
\"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.839169 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.839361 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.339324441 +0000 UTC m=+123.632169918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.840052 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lk6xl\" (UniqueName: \"kubernetes.io/projected/04382913-99f0-4bca-abaa-952bbb21e06a-kube-api-access-lk6xl\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.840425 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.840452 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-78g6q\" (UniqueName: \"kubernetes.io/projected/d5ad6856-ba98-4f91-b102-7e41020e2ecf-kube-api-access-78g6q\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.840531 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.840624 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nd4vw\" (UniqueName: \"kubernetes.io/projected/9556f84b-c3ef-4dd1-8483-67e5960385a1-kube-api-access-nd4vw\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.840750 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w7zsw\" (UniqueName: \"kubernetes.io/projected/f7c859cf-4198-4549-b24d-d5cc7e650257-kube-api-access-w7zsw\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.841139 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.341123058 +0000 UTC m=+123.633968565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.845371 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk6xl\" (UniqueName: \"kubernetes.io/projected/04382913-99f0-4bca-abaa-952bbb21e06a-kube-api-access-lk6xl\") pod \"authentication-operator-7f5c659b84-bjxx7\" (UID: \"04382913-99f0-4bca-abaa-952bbb21e06a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.845731 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7zsw\" (UniqueName: \"kubernetes.io/projected/f7c859cf-4198-4549-b24d-d5cc7e650257-kube-api-access-w7zsw\") pod \"apiserver-8596bd845d-jpdh9\" (UID: \"f7c859cf-4198-4549-b24d-d5cc7e650257\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.846215 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd4vw\" (UniqueName: \"kubernetes.io/projected/9556f84b-c3ef-4dd1-8483-67e5960385a1-kube-api-access-nd4vw\") pod \"apiserver-9ddfb9f55-8vsfg\" (UID: \"9556f84b-c3ef-4dd1-8483-67e5960385a1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.846218 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-78g6q\" (UniqueName: \"kubernetes.io/projected/d5ad6856-ba98-4f91-b102-7e41020e2ecf-kube-api-access-78g6q\") pod \"route-controller-manager-776cdc94d6-575sv\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.848778 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88131373-e414-436f-83e1-9d4aa4b55f62-serving-cert\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.897540 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6mk6\" (UniqueName: \"kubernetes.io/projected/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-kube-api-access-j6mk6\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: 
\"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.916253 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf47c\" (UniqueName: \"kubernetes.io/projected/df7bb012-1926-4cf5-97ee-990d99a956b7-kube-api-access-lf47c\") pod \"migrator-866fcbc849-8wzbs\" (UID: \"df7bb012-1926-4cf5-97ee-990d99a956b7\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.935748 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf4n4\" (UniqueName: \"kubernetes.io/projected/93bc8cd9-3692-4406-8351-3a273fa1d9c8-kube-api-access-hf4n4\") pod \"packageserver-7d4fc7d867-trgjl\" (UID: \"93bc8cd9-3692-4406-8351-3a273fa1d9c8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.942377 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.943076 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.943225 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.443137648 +0000 UTC m=+123.735983115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.943332 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.943580 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.943768 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.943989 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:10 crc kubenswrapper[5118]: E1208 19:31:10.944573 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.444543045 +0000 UTC m=+123.737388542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.958942 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfnhk\" (UniqueName: \"kubernetes.io/projected/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-kube-api-access-gfnhk\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.977930 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp9tk\" (UniqueName: \"kubernetes.io/projected/13ce33b9-2283-4f53-8400-442c0ee364e5-kube-api-access-pp9tk\") pod \"machine-config-server-q8qqz\" (UID: \"13ce33b9-2283-4f53-8400-442c0ee364e5\") " pod="openshift-machine-config-operator/machine-config-server-q8qqz" Dec 08 19:31:10 crc kubenswrapper[5118]: I1208 19:31:10.998040 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq6vj\" (UniqueName: \"kubernetes.io/projected/3a1eebb9-9d59-41be-bf07-445f24f0eb35-kube-api-access-fq6vj\") pod \"multus-admission-controller-69db94689b-r5dqp\" (UID: \"3a1eebb9-9d59-41be-bf07-445f24f0eb35\") " pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.008137 5118 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.008292 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af286630-dbd3-48df-93d0-52acf80a3a67-metrics-tls podName:af286630-dbd3-48df-93d0-52acf80a3a67 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.508256583 +0000 UTC m=+123.801102040 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/af286630-dbd3-48df-93d0-52acf80a3a67-metrics-tls") pod "dns-operator-799b87ffcd-x84b4" (UID: "af286630-dbd3-48df-93d0-52acf80a3a67") : failed to sync secret cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.018050 5118 request.go:752] "Waited before sending request" delay="1.008366338s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.019463 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f876ae2-ff59-421f-8f12-b6d980abb001-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-sljmn\" (UID: \"3f876ae2-ff59-421f-8f12-b6d980abb001\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.040478 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.040545 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r84r\" (UniqueName: \"kubernetes.io/projected/f4cffd32-5b39-471d-aacb-44067449bf9a-kube-api-access-9r84r\") pod \"kube-storage-version-migrator-operator-565b79b866-qvvjj\" (UID: \"f4cffd32-5b39-471d-aacb-44067449bf9a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.045331 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.045537 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.545513567 +0000 UTC m=+123.838359074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.046134 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.046176 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.046542 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.546531714 +0000 UTC m=+123.839377171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.071104 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.080488 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p894h\" (UniqueName: \"kubernetes.io/projected/943f723e-defa-4cda-914e-964cdf480831-kube-api-access-p894h\") pod \"marketplace-operator-547dbd544d-8htc9\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.085612 5118 util.go:30] "No sandbox for pod can be found. 
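[Annotation, not part of the captured log: the request.go line above ("Waited before sending request ... client-side throttling, not priority and fairness") is client-go's token-bucket rate limiter delaying an API call during the startup burst. A small sketch of that limiter; the QPS and burst values are illustrative assumptions, not the kubelet's configured settings.]

```go
// Sketch of the client-go token-bucket limiter behind the
// "client-side throttling" wait logged above. QPS/burst values are assumed.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// client-go clients wrap requests in a token-bucket limiter (configured
	// via rest.Config's QPS and Burst fields). Once the initial burst is
	// spent, each Accept() blocks until a token refills; that blocked time
	// is what gets logged as a wait with a throttling reason.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5.0 /* QPS */, 10 /* burst */)
	for i := 1; i <= 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks when the bucket is empty
		fmt.Printf("request %d waited %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}
```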
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.097878 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35446e1e-d728-44f3-b17f-372a50dbcb73-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-wws6k\" (UID: \"35446e1e-d728-44f3-b17f-372a50dbcb73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.102268 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.118066 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54da62c3-ab33-49b0-bc8e-27ed0cb9212a-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-wjdlj\" (UID: \"54da62c3-ab33-49b0-bc8e-27ed0cb9212a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.136120 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.142388 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kssmv\" (UniqueName: \"kubernetes.io/projected/2a73f457-25de-4a7a-8b9b-d4fccf4c27fb-kube-api-access-kssmv\") pod \"olm-operator-5cdf44d969-wbzcd\" (UID: \"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.145063 5118 projected.go:289] Couldn't get configMap openshift-console/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.145098 5118 projected.go:194] Error preparing data for projected volume kube-api-access-57h7j for pod openshift-console/downloads-747b44746d-qnl9q: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.145195 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/86f2d26a-630b-4a98-9dc3-c1ec245d7b6b-kube-api-access-57h7j podName:86f2d26a-630b-4a98-9dc3-c1ec245d7b6b nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.645171593 +0000 UTC m=+123.938017050 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-57h7j" (UniqueName: "kubernetes.io/projected/86f2d26a-630b-4a98-9dc3-c1ec245d7b6b-kube-api-access-57h7j") pod "downloads-747b44746d-qnl9q" (UID: "86f2d26a-630b-4a98-9dc3-c1ec245d7b6b") : failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.147951 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.148181 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.648127822 +0000 UTC m=+123.940973419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.148710 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.149069 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.649053956 +0000 UTC m=+123.941899413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.163412 5118 projected.go:289] Couldn't get configMap openshift-cluster-machine-approver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.163462 5118 projected.go:194] Error preparing data for projected volume kube-api-access-zff75 for pod openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.163549 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/860d245e-aede-47bd-a8fe-b8bd2f79fd86-kube-api-access-zff75 podName:860d245e-aede-47bd-a8fe-b8bd2f79fd86 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.663522492 +0000 UTC m=+123.956367949 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zff75" (UniqueName: "kubernetes.io/projected/860d245e-aede-47bd-a8fe-b8bd2f79fd86-kube-api-access-zff75") pod "machine-approver-54c688565-zn9cs" (UID: "860d245e-aede-47bd-a8fe-b8bd2f79fd86") : failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.165931 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlqfv\" (UniqueName: \"kubernetes.io/projected/574501a5-bb4b-4c42-9046-e00bc9447f56-kube-api-access-jlqfv\") pod \"cni-sysctl-allowlist-ds-rxwj8\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.171314 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.176505 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4twn\" (UniqueName: \"kubernetes.io/projected/97876313-0296-4efa-b7ea-403570a2cd81-kube-api-access-f4twn\") pod \"service-ca-74545575db-wb5jl\" (UID: \"97876313-0296-4efa-b7ea-403570a2cd81\") " pod="openshift-service-ca/service-ca-74545575db-wb5jl" Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.191074 5118 projected.go:289] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.191126 5118 projected.go:194] Error preparing data for projected volume kube-api-access-8vd57 for pod openshift-console-operator/console-operator-67c89758df-tkctz: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.191200 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e65a45b2-4747-4f30-bbfa-d8a711e702e8-kube-api-access-8vd57 podName:e65a45b2-4747-4f30-bbfa-d8a711e702e8 nodeName:}" failed. 
No retries permitted until 2025-12-08 19:31:11.691179729 +0000 UTC m=+123.984025186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8vd57" (UniqueName: "kubernetes.io/projected/e65a45b2-4747-4f30-bbfa-d8a711e702e8-kube-api-access-8vd57") pod "console-operator-67c89758df-tkctz" (UID: "e65a45b2-4747-4f30-bbfa-d8a711e702e8") : failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.204099 5118 projected.go:289] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.204194 5118 projected.go:194] Error preparing data for projected volume kube-api-access-tzx79 for pod openshift-authentication/oauth-openshift-66458b6674-b68tb: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.204337 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00a48e62-fdf7-4d8f-846f-295c3cb4489e-kube-api-access-tzx79 podName:00a48e62-fdf7-4d8f-846f-295c3cb4489e nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.704284928 +0000 UTC m=+123.997130385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tzx79" (UniqueName: "kubernetes.io/projected/00a48e62-fdf7-4d8f-846f-295c3cb4489e-kube-api-access-tzx79") pod "oauth-openshift-66458b6674-b68tb" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e") : failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.207797 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjpbn\" (UniqueName: \"kubernetes.io/projected/b309434e-b723-47e5-bce5-30f0c1ca2a1e-kube-api-access-sjpbn\") pod \"dns-default-gjccc\" (UID: \"b309434e-b723-47e5-bce5-30f0c1ca2a1e\") " pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.212214 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.224590 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r59v\" (UniqueName: \"kubernetes.io/projected/f76aa800-8554-45e3-ab38-e5b8efd7c3ad-kube-api-access-6r59v\") pod \"machine-config-controller-f9cdd68f7-nhcw5\" (UID: \"f76aa800-8554-45e3-ab38-e5b8efd7c3ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.225236 5118 util.go:30] "No sandbox for pod can be found. 
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.249066 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k622\" (UniqueName: \"kubernetes.io/projected/a6574d02-8035-49ea-8d01-df1b3c1d1433-kube-api-access-6k622\") pod \"service-ca-operator-5b9c976747-57q82\" (UID: \"a6574d02-8035-49ea-8d01-df1b3c1d1433\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.250903 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.251397 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.751369884 +0000 UTC m=+124.044215491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.270672 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q2ks\" (UniqueName: \"kubernetes.io/projected/9f8fbbac-99ac-4a11-9f93-610d12177e71-kube-api-access-8q2ks\") pod \"machine-config-operator-67c9d58cbb-q45w8\" (UID: \"9f8fbbac-99ac-4a11-9f93-610d12177e71\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.304500 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b7e81ca-c351-425e-a9e2-ae354f83f8b8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-z6tr5\" (UID: \"0b7e81ca-c351-425e-a9e2-ae354f83f8b8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.304976 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.314186 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.326672 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.336565 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5"
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.338267 5118 projected.go:289] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.339129 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4xp6\" (UniqueName: \"kubernetes.io/projected/90514180-5ed3-4eb5-b13e-cd3b90998a22-kube-api-access-q4xp6\") pod \"router-default-68cf44c8b8-vzpzx\" (UID: \"90514180-5ed3-4eb5-b13e-cd3b90998a22\") " pod="openshift-ingress/router-default-68cf44c8b8-vzpzx"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.341957 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.349864 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ltdq\" (UniqueName: \"kubernetes.io/projected/72043ba9-5052-46eb-8c7c-2e61734cfd17-kube-api-access-2ltdq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qbjbk\" (UID: \"72043ba9-5052-46eb-8c7c-2e61734cfd17\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.353737 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf"
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.354296 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.854269396 +0000 UTC m=+124.147115033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.365765 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tstr\" (UniqueName: \"kubernetes.io/projected/58cb15f0-81cf-46ab-8c99-afa4fd7a67d6-kube-api-access-6tstr\") pod \"csi-hostpathplugin-xc9vh\" (UID: \"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9vh"
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.379415 5118 projected.go:289] Couldn't get configMap openshift-console/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.379485 5118 projected.go:194] Error preparing data for projected volume kube-api-access-jcls6 for pod openshift-console/console-64d44f6ddf-hxwm8: failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.379433 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5"
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.379600 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db584c29-faf0-48cd-ac87-3af21a6fcbe4-kube-api-access-jcls6 podName:db584c29-faf0-48cd-ac87-3af21a6fcbe4 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.8795674 +0000 UTC m=+124.172412857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jcls6" (UniqueName: "kubernetes.io/projected/db584c29-faf0-48cd-ac87-3af21a6fcbe4-kube-api-access-jcls6") pod "console-64d44f6ddf-hxwm8" (UID: "db584c29-faf0-48cd-ac87-3af21a6fcbe4") : failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.391763 5118 projected.go:289] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.392368 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj2l8\" (UniqueName: \"kubernetes.io/projected/75d3ab55-5d06-433f-9c10-5113c2f9f367-kube-api-access-xj2l8\") pod \"etcd-operator-69b85846b6-z6x7v\" (UID: \"75d3ab55-5d06-433f-9c10-5113c2f9f367\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.409711 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.409764 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5k8h\" (UniqueName: \"kubernetes.io/projected/97b35cab-0a8d-4331-8724-cbe640b9e24c-kube-api-access-x5k8h\") pod \"catalog-operator-75ff9f647d-gtsl2\" (UID: \"97b35cab-0a8d-4331-8724-cbe640b9e24c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2"
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.421441 5118 projected.go:289] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.422060 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.424806 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfnfk\" (UniqueName: \"kubernetes.io/projected/0179285f-606e-490f-b531-c95df3483e77-kube-api-access-dfnfk\") pod \"ingress-canary-lf9n6\" (UID: \"0179285f-606e-490f-b531-c95df3483e77\") " pod="openshift-ingress-canary/ingress-canary-lf9n6"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.444844 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-wb5jl"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.455466 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.455918 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.955898736 +0000 UTC m=+124.248744193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.456679 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfx72\" (UniqueName: \"kubernetes.io/projected/d19f60aa-72cf-4a40-a402-300df68ad28f-kube-api-access-gfx72\") pod \"package-server-manager-77f986bd66-pkb44\" (UID: \"d19f60aa-72cf-4a40-a402-300df68ad28f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.456778 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.465657 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstcl\" (UniqueName: \"kubernetes.io/projected/ab666d86-db2b-4489-a868-8d24159ea775-kube-api-access-bstcl\") pod \"collect-profiles-29420370-s24t5\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.468121 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.476901 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e349745-eed9-4471-abae-b45e90ce805d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-kr8np\" (UID: \"4e349745-eed9-4471-abae-b45e90ce805d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.480215 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.480289 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.502149 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xc9vh"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.503130 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.519546 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lf9n6"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.520959 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.543907 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.557222 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.557296 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af286630-dbd3-48df-93d0-52acf80a3a67-metrics-tls\") pod \"dns-operator-799b87ffcd-x84b4\" (UID: \"af286630-dbd3-48df-93d0-52acf80a3a67\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4"
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.557826 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.057802182 +0000 UTC m=+124.350647639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.560541 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.564434 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af286630-dbd3-48df-93d0-52acf80a3a67-metrics-tls\") pod \"dns-operator-799b87ffcd-x84b4\" (UID: \"af286630-dbd3-48df-93d0-52acf80a3a67\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.581292 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.584497 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.592931 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.615366 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.620511 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.621001 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.652466 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.662116 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.662450 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.663044 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-57h7j\" (UniqueName: \"kubernetes.io/projected/86f2d26a-630b-4a98-9dc3-c1ec245d7b6b-kube-api-access-57h7j\") pod \"downloads-747b44746d-qnl9q\" (UID: \"86f2d26a-630b-4a98-9dc3-c1ec245d7b6b\") " pod="openshift-console/downloads-747b44746d-qnl9q"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.664784 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr"
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.664946 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.164920117 +0000 UTC m=+124.457765694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.682383 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.689677 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-57h7j\" (UniqueName: \"kubernetes.io/projected/86f2d26a-630b-4a98-9dc3-c1ec245d7b6b-kube-api-access-57h7j\") pod \"downloads-747b44746d-qnl9q\" (UID: \"86f2d26a-630b-4a98-9dc3-c1ec245d7b6b\") " pod="openshift-console/downloads-747b44746d-qnl9q"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.698809 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.700748 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.724251 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" event={"ID":"574501a5-bb4b-4c42-9046-e00bc9447f56","Type":"ContainerStarted","Data":"1ebf6179ada0786e0cc843832b9e3e7df51baffd0425df229248586b60c2c903"}
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.734278 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.735104 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.743848 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.759065 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8htc9"]
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.761561 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-q8qqz" event={"ID":"13ce33b9-2283-4f53-8400-442c0ee364e5","Type":"ContainerStarted","Data":"eb0dd609aabf74e5db92f93fde7e3fdc8f9845faa579ec9b9aad1fdb3ba63985"}
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.761695 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-q8qqz" event={"ID":"13ce33b9-2283-4f53-8400-442c0ee364e5","Type":"ContainerStarted","Data":"2b3839ce5e94474145759e432d9acde9c0db8aca937543412c1c03d8125cbac4"}
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.764020 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.767907 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zff75\" (UniqueName: \"kubernetes.io/projected/860d245e-aede-47bd-a8fe-b8bd2f79fd86-kube-api-access-zff75\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.767982 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.767614 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.768676 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.268655822 +0000 UTC m=+124.561501279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
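The "SyncLoop (PLEG): event for pod" entries above are the kubelet's pod-lifecycle-event generator feeding ContainerStarted events into its sync loop; the "No sandbox for pod can be found" lines are the step just before, where a new pod sandbox has to be created first. PLEG is internal to the kubelet, but a rough external analogue (illustrative sketch only, not equivalent) is watching pod status transitions from the API server; the namespace below is one from the log:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Stream pod changes in one namespace; each event loosely mirrors the
	// PLEG-driven pod updates the kubelet processes in its sync loop.
	w, err := cs.CoreV1().Pods("openshift-multus").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		fmt.Printf("%s %s phase=%s\n", ev.Type, pod.Name, pod.Status.Phase)
	}
}
```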
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.770306 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tzx79\" (UniqueName: \"kubernetes.io/projected/00a48e62-fdf7-4d8f-846f-295c3cb4489e-kube-api-access-tzx79\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.770365 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8vd57\" (UniqueName: \"kubernetes.io/projected/e65a45b2-4747-4f30-bbfa-d8a711e702e8-kube-api-access-8vd57\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.772496 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.773533 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" event={"ID":"90514180-5ed3-4eb5-b13e-cd3b90998a22","Type":"ContainerStarted","Data":"756aa74b591721d2f4713e939bbf5a5c9afb1d7c42a30acd1d373be072c22aee"}
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.780209 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.781167 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vd57\" (UniqueName: \"kubernetes.io/projected/e65a45b2-4747-4f30-bbfa-d8a711e702e8-kube-api-access-8vd57\") pod \"console-operator-67c89758df-tkctz\" (UID: \"e65a45b2-4747-4f30-bbfa-d8a711e702e8\") " pod="openshift-console-operator/console-operator-67c89758df-tkctz"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.784795 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zff75\" (UniqueName: \"kubernetes.io/projected/860d245e-aede-47bd-a8fe-b8bd2f79fd86-kube-api-access-zff75\") pod \"machine-approver-54c688565-zn9cs\" (UID: \"860d245e-aede-47bd-a8fe-b8bd2f79fd86\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.787635 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzx79\" (UniqueName: \"kubernetes.io/projected/00a48e62-fdf7-4d8f-846f-295c3cb4489e-kube-api-access-tzx79\") pod \"oauth-openshift-66458b6674-b68tb\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " pod="openshift-authentication/oauth-openshift-66458b6674-b68tb"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.792520 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9693139-63f6-471e-ae19-744460a6b114-metrics-certs\") pod \"network-metrics-daemon-qmvkf\" (UID: \"b9693139-63f6-471e-ae19-744460a6b114\") " pod="openshift-multus/network-metrics-daemon-qmvkf"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.803333 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.805128 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.821858 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.825981 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.840313 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.844838 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9"
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.860700 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: W1208 19:31:11.870947 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod943f723e_defa_4cda_914e_964cdf480831.slice/crio-d3b1630101051fb79d1add750ac7cc08779f2fc4d3d51ed0158d8212f76727e4 WatchSource:0}: Error finding container d3b1630101051fb79d1add750ac7cc08779f2fc4d3d51ed0158d8212f76727e4: Status 404 returned error can't find the container with id d3b1630101051fb79d1add750ac7cc08779f2fc4d3d51ed0158d8212f76727e4
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.871202 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.871400 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.37137022 +0000 UTC m=+124.664215677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.871605 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf"
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.872286 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.372256623 +0000 UTC m=+124.665102270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.881571 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.890427 5118 projected.go:194] Error preparing data for projected volume kube-api-access-wx9kk for pod openshift-config-operator/openshift-config-operator-5777786469-fz5jn: failed to sync configmap cache: timed out waiting for the condition
Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.890624 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97e901dc-7a73-42d4-bbb9-3a7391a79105-kube-api-access-wx9kk podName:97e901dc-7a73-42d4-bbb9-3a7391a79105 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.390585592 +0000 UTC m=+124.683431049 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wx9kk" (UniqueName: "kubernetes.io/projected/97e901dc-7a73-42d4-bbb9-3a7391a79105-kube-api-access-wx9kk") pod "openshift-config-operator-5777786469-fz5jn" (UID: "97e901dc-7a73-42d4-bbb9-3a7391a79105") : failed to sync configmap cache: timed out waiting for the condition
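Every failed operation in this stretch is re-queued with "No retries permitted until ... (durationBeforeRetry 500ms)": the kubelet's nestedpendingoperations machinery records a per-operation delay before the volume reconciler may try again, which is why the same MountDevice/TearDown pair repeats at roughly half-second intervals. 500ms is the initial delay; on persistent failure of one operation the kubelet's backoff is designed to grow (exact growth and cap are version-dependent, so treat the doubling below as an assumption). The pacing can be approximated with apimachinery's wait.Backoff (illustrative sketch, not the kubelet's own exponentialbackoff package):

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Initial 500ms matches durationBeforeRetry in the log; Factor 2.0 is the
	// assumed growth on repeated failures of the same operation.
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond,
		Factor:   2.0,
		Steps:    5,
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: driver not registered yet, retrying\n", attempt)
		return false, nil // not done, no hard error: sleep and retry
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println("gave up: driver never registered within the step budget")
	}
}
```

In the kubelet the loop never "gives up" this way; the operation simply stays pending until the CSI driver registers, at which point the next retry succeeds.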
Error: MountVolume.SetUp failed for volume "kube-api-access-wx9kk" (UniqueName: "kubernetes.io/projected/97e901dc-7a73-42d4-bbb9-3a7391a79105-kube-api-access-wx9kk") pod "openshift-config-operator-5777786469-fz5jn" (UID: "97e901dc-7a73-42d4-bbb9-3a7391a79105") : failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.903147 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.944168 5118 projected.go:194] Error preparing data for projected volume kube-api-access-2jxd9 for pod openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.944265 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-kube-api-access-2jxd9 podName:e9ee217f-a422-41dc-99a3-72c1dcb1c3e7 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.444242743 +0000 UTC m=+124.737088200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2jxd9" (UniqueName: "kubernetes.io/projected/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-kube-api-access-2jxd9") pod "openshift-apiserver-operator-846cbfc458-47lhr" (UID: "e9ee217f-a422-41dc-99a3-72c1dcb1c3e7") : failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.946830 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.955197 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.962713 5118 projected.go:194] Error preparing data for projected volume kube-api-access-qgz5w for pod openshift-controller-manager/controller-manager-65b6cccf98-kk4vd: failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.962861 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88131373-e414-436f-83e1-9d4aa4b55f62-kube-api-access-qgz5w podName:88131373-e414-436f-83e1-9d4aa4b55f62 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.462823388 +0000 UTC m=+124.755668845 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qgz5w" (UniqueName: "kubernetes.io/projected/88131373-e414-436f-83e1-9d4aa4b55f62-kube-api-access-qgz5w") pod "controller-manager-65b6cccf98-kk4vd" (UID: "88131373-e414-436f-83e1-9d4aa4b55f62") : failed to sync configmap cache: timed out waiting for the condition Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.963909 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fx42\" (UniqueName: \"kubernetes.io/projected/b138de57-89ae-4cf5-8136-433862988df2-kube-api-access-7fx42\") pod \"openshift-controller-manager-operator-686468bdd5-zmlzt\" (UID: \"b138de57-89ae-4cf5-8136-433862988df2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.967228 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.974892 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.975230 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jcls6\" (UniqueName: \"kubernetes.io/projected/db584c29-faf0-48cd-ac87-3af21a6fcbe4-kube-api-access-jcls6\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:11 crc kubenswrapper[5118]: E1208 19:31:11.975944 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.475899596 +0000 UTC m=+124.768745053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.990772 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.993973 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcls6\" (UniqueName: \"kubernetes.io/projected/db584c29-faf0-48cd-ac87-3af21a6fcbe4-kube-api-access-jcls6\") pod \"console-64d44f6ddf-hxwm8\" (UID: \"db584c29-faf0-48cd-ac87-3af21a6fcbe4\") " pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:11 crc kubenswrapper[5118]: I1208 19:31:11.994404 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hft55\" (UniqueName: \"kubernetes.io/projected/6dcf4602-a9b9-40b0-af37-2a69edc555f0-kube-api-access-hft55\") pod \"cluster-samples-operator-6b564684c8-6xz4z\" (UID: \"6dcf4602-a9b9-40b0-af37-2a69edc555f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.001275 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.022052 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.024033 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.026564 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.027675 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.042124 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.045645 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.061093 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.065878 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.077362 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.077810 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.577793362 +0000 UTC m=+124.870638819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.079654 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.081070 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-qnl9q" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.141492 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.141833 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qmvkf" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.144075 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.159641 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl7xg\" (UniqueName: \"kubernetes.io/projected/af286630-dbd3-48df-93d0-52acf80a3a67-kube-api-access-bl7xg\") pod \"dns-operator-799b87ffcd-x84b4\" (UID: \"af286630-dbd3-48df-93d0-52acf80a3a67\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.160724 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.169106 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.179583 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.181013 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.181220 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.681191839 +0000 UTC m=+124.974037396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.183204 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.183911 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.184982 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.684925968 +0000 UTC m=+124.977771575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.187034 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.233206 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.243353 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.262399 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.279023 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.291109 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.291557 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.791499749 +0000 UTC m=+125.084345206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.292440 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.296327 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.796296126 +0000 UTC m=+125.089141613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.385526 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj"] Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.394148 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.394381 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.89434161 +0000 UTC m=+125.187187077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.394894 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.395080 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wx9kk\" (UniqueName: \"kubernetes.io/projected/97e901dc-7a73-42d4-bbb9-3a7391a79105-kube-api-access-wx9kk\") pod \"openshift-config-operator-5777786469-fz5jn\" (UID: \"97e901dc-7a73-42d4-bbb9-3a7391a79105\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.396099 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.896084036 +0000 UTC m=+125.188929493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.404197 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-r5dqp"] Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.412160 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs"] Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.413423 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx9kk\" (UniqueName: \"kubernetes.io/projected/97e901dc-7a73-42d4-bbb9-3a7391a79105-kube-api-access-wx9kk\") pod \"openshift-config-operator-5777786469-fz5jn\" (UID: \"97e901dc-7a73-42d4-bbb9-3a7391a79105\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.496979 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.498970 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.998922218 +0000 UTC m=+125.291767685 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.500102 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qgz5w\" (UniqueName: \"kubernetes.io/projected/88131373-e414-436f-83e1-9d4aa4b55f62-kube-api-access-qgz5w\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.500239 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.500298 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2jxd9\" (UniqueName: \"kubernetes.io/projected/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-kube-api-access-2jxd9\") pod \"openshift-apiserver-operator-846cbfc458-47lhr\" (UID: \"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.501444 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.001435365 +0000 UTC m=+125.294280822 (durationBeforeRetry 500ms). 
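The MountDevice and TearDown failures repeating through this window all reduce to one condition: the kubelet resolves a CSI driver by name against its in-memory registry of node plugins, and kubevirt.io.hostpath-provisioner has not registered yet (the csi-hostpathplugin pod is only synced further down, at 19:31:14). Until the plugin registers over the kubelet's plugin-registration socket, every volume operation for this PVC fails fast and is requeued with the 500ms backoff visible in each nestedpendingoperations line. A minimal sketch of that lookup-then-retry shape, using an illustrative registry type rather than the kubelet's actual code:

// Illustrative sketch only: a name-keyed registry that is empty until the
// node plugin registers, which is the condition behind the repeated
// "not found in the list of registered CSI drivers" lines above.
package main

import (
    "fmt"
    "sync"
)

type csiRegistry struct {
    mu      sync.RWMutex
    drivers map[string]string // driver name -> endpoint (illustrative)
}

func (r *csiRegistry) clientFor(name string) (string, error) {
    r.mu.RLock()
    defer r.mu.RUnlock()
    ep, ok := r.drivers[name]
    if !ok {
        return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
    }
    return ep, nil
}

func main() {
    reg := &csiRegistry{drivers: map[string]string{}}
    // Before registration: mount/unmount fails and is retried (here: every 500ms).
    if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
        fmt.Println(err)
    }
    // Once the plugin pod registers, the same lookup succeeds with no operator action.
    reg.mu.Lock()
    reg.drivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
    reg.mu.Unlock()
    if ep, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err == nil {
        fmt.Println("registered at", ep)
    }
}

The loop is self-healing: once registration happens, the pending MountVolume.MountDevice for image-registry-66587d64c8-k49rf goes through on the next retry.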
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.508230 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jxd9\" (UniqueName: \"kubernetes.io/projected/e9ee217f-a422-41dc-99a3-72c1dcb1c3e7-kube-api-access-2jxd9\") pod \"openshift-apiserver-operator-846cbfc458-47lhr\" (UID: \"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.508556 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgz5w\" (UniqueName: \"kubernetes.io/projected/88131373-e414-436f-83e1-9d4aa4b55f62-kube-api-access-qgz5w\") pod \"controller-manager-65b6cccf98-kk4vd\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.601399 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.601927 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.101895732 +0000 UTC m=+125.394741199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.684184 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.689604 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.707453 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.707865 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.207849537 +0000 UTC m=+125.500694994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.739019 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.747610 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.764072 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.777863 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.812381 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.812744 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.312719412 +0000 UTC m=+125.605564869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.827308 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl"] Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.835192 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" event={"ID":"90514180-5ed3-4eb5-b13e-cd3b90998a22","Type":"ContainerStarted","Data":"16ce4fbcd2bdabf3baaf0f0867678207a1a90863e3816f564034a7d1efe22220"} Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.846411 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" event={"ID":"860d245e-aede-47bd-a8fe-b8bd2f79fd86","Type":"ContainerStarted","Data":"cd83700509747706ca5ed1508e2e683a1ab5579275aa86489aadf50b045ff5b0"} Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.855853 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" event={"ID":"f4cffd32-5b39-471d-aacb-44067449bf9a","Type":"ContainerStarted","Data":"4d969ebd4331284a79aee6030dee600b21f021068a85263a2cadfa62ae3b66a2"} Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.858769 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gjccc"] Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.864643 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" event={"ID":"574501a5-bb4b-4c42-9046-e00bc9447f56","Type":"ContainerStarted","Data":"c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388"} Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.865288 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.867745 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" event={"ID":"943f723e-defa-4cda-914e-964cdf480831","Type":"ContainerStarted","Data":"394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6"} Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.867822 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" event={"ID":"943f723e-defa-4cda-914e-964cdf480831","Type":"ContainerStarted","Data":"d3b1630101051fb79d1add750ac7cc08779f2fc4d3d51ed0158d8212f76727e4"} Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.869076 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.874526 5118 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-8htc9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 
10.217.0.27:8080: connect: connection refused" start-of-body= Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.874623 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" podUID="943f723e-defa-4cda-914e-964cdf480831" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.886203 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs" event={"ID":"df7bb012-1926-4cf5-97ee-990d99a956b7","Type":"ContainerStarted","Data":"ce7be678677c4841ca61457fe808a16c3c2c378ee41321e0f35c9d6b1e943317"} Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.888334 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" event={"ID":"3a1eebb9-9d59-41be-bf07-445f24f0eb35","Type":"ContainerStarted","Data":"2c17b6111d23964828333132c0e68b904b4f7bb4527d10fcd43ced2ca40e6ae4"} Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.915033 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-q8qqz" podStartSLOduration=6.915012379 podStartE2EDuration="6.915012379s" podCreationTimestamp="2025-12-08 19:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:12.91356447 +0000 UTC m=+125.206409927" watchObservedRunningTime="2025-12-08 19:31:12.915012379 +0000 UTC m=+125.207857836" Dec 08 19:31:12 crc kubenswrapper[5118]: I1208 19:31:12.917584 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:12 crc kubenswrapper[5118]: E1208 19:31:12.917961 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.417947377 +0000 UTC m=+125.710792834 (durationBeforeRetry 500ms). 
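The marketplace-operator readiness failure here ("connect: connection refused") and the router-default startup failure below ("HTTP probe failed with statuscode: 500") are ordinary cold-start noise: the kubelet probes the container endpoint before the process is listening, or while the server still reports itself unhealthy. marketplace-operator flips to ready at 19:31:13 a few entries down. The HTTP prober boils down to roughly the following, a simplified sketch rather than the kubelet prober's real implementation:

// Simplified readiness-probe shape: a plain GET against the container
// endpoint; a refused connection means nothing is listening yet, and a
// status outside 200-399 counts as failure. Illustrative only.
package main

import (
    "fmt"
    "net/http"
    "time"
)

func probe(url string) error {
    client := &http.Client{Timeout: time.Second}
    resp, err := client.Get(url)
    if err != nil {
        return err // e.g. "dial tcp 10.217.0.27:8080: connect: connection refused"
    }
    defer resp.Body.Close()
    if resp.StatusCode < 200 || resp.StatusCode >= 400 {
        return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    }
    return nil
}

func main() {
    fmt.Println(probe("http://10.217.0.27:8080/healthz"))
}

A refused TCP connection and a 500 reply are logged differently but both count as a single failed attempt against the probe's configured failureThreshold.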
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.019864 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:13 crc kubenswrapper[5118]: E1208 19:31:13.023024 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.522999987 +0000 UTC m=+125.815845444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.123577 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:13 crc kubenswrapper[5118]: E1208 19:31:13.124265 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.624238216 +0000 UTC m=+125.917083673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.225602 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:13 crc kubenswrapper[5118]: E1208 19:31:13.226059 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.726025209 +0000 UTC m=+126.018870666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.226151 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:13 crc kubenswrapper[5118]: E1208 19:31:13.226635 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.726619475 +0000 UTC m=+126.019464922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.346072 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:13 crc kubenswrapper[5118]: E1208 19:31:13.346650 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.846625623 +0000 UTC m=+126.139471080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.447357 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:13 crc kubenswrapper[5118]: E1208 19:31:13.447912 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.947895183 +0000 UTC m=+126.240740640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.552671 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:13 crc kubenswrapper[5118]: E1208 19:31:13.553460 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.053424876 +0000 UTC m=+126.346270333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.629477 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.651874 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:13 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:13 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:13 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.651990 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.657057 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:13 crc kubenswrapper[5118]: E1208 19:31:13.658630 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.158604859 +0000 UTC m=+126.451450316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.701769 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" podStartSLOduration=7.70174913 podStartE2EDuration="7.70174913s" podCreationTimestamp="2025-12-08 19:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:13.700608789 +0000 UTC m=+125.993454246" watchObservedRunningTime="2025-12-08 19:31:13.70174913 +0000 UTC m=+125.994594587" Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.757761 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:13 crc kubenswrapper[5118]: E1208 19:31:13.758110 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.258091941 +0000 UTC m=+126.550937398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.780494 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" podStartSLOduration=105.780472678 podStartE2EDuration="1m45.780472678s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:13.728985105 +0000 UTC m=+126.021830562" watchObservedRunningTime="2025-12-08 19:31:13.780472678 +0000 UTC m=+126.073318135" Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.781954 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podStartSLOduration=105.781946937 podStartE2EDuration="1m45.781946937s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:13.772707981 +0000 UTC m=+126.065553458" watchObservedRunningTime="2025-12-08 19:31:13.781946937 +0000 UTC m=+126.074792394" Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.859169 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:13 crc kubenswrapper[5118]: E1208 19:31:13.859485 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.359471504 +0000 UTC m=+126.652316961 (durationBeforeRetry 500ms). 
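The pod_startup_latency_tracker lines are worth decoding once: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes image-pull time; here the pull timestamps are the zero value (0001-01-01 00:00:00), meaning no pull was recorded, so the two durations coincide. Checking the arithmetic for the router-default entry with a throwaway snippet (timestamps copied from the log, sub-second parts dropped):

// Sanity check of the latency-tracker arithmetic above: E2E duration is
// observed running time minus pod creation time.
package main

import (
    "fmt"
    "time"
)

func main() {
    created, _ := time.Parse("2006-01-02 15:04:05", "2025-12-08 19:29:28")
    running, _ := time.Parse("2006-01-02 15:04:05", "2025-12-08 19:31:13")
    fmt.Println(running.Sub(created)) // 1m45s, matching the logged 105.78s once fractions are kept
}

The ~105s figure reflects that these pods were created at 19:29:28 but only observed running after the kubelet came back up in the 19:31 window, not that the containers themselves took that long to start.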
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.906015 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gjccc" event={"ID":"b309434e-b723-47e5-bce5-30f0c1ca2a1e","Type":"ContainerStarted","Data":"30d2457329388ac91ce370dfc18727a7978988e1c5b6d9f58f93d02b5d0ea37f"} Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.916806 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs" event={"ID":"df7bb012-1926-4cf5-97ee-990d99a956b7","Type":"ContainerStarted","Data":"a9bb388d324f371b67968e4ed9f1981e157cdc993b0032f0e26d6198d4413a65"} Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.916859 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs" event={"ID":"df7bb012-1926-4cf5-97ee-990d99a956b7","Type":"ContainerStarted","Data":"5a0e1fbaa5ba7d0827668d74974a5d8c7c435665024e8df6800e9353f8bc6b01"} Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.933048 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" event={"ID":"3a1eebb9-9d59-41be-bf07-445f24f0eb35","Type":"ContainerStarted","Data":"a21fb80d94c0a8728a788c3dc76ec1d874ec87406da5b8d6408bdfd8b02d557c"} Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.940279 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj"] Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.943362 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" event={"ID":"93bc8cd9-3692-4406-8351-3a273fa1d9c8","Type":"ContainerStarted","Data":"3f00be6fe57f322d15e52276dcceb4e6a82eed5efce2148fcf9dcce0f8aecd6c"} Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.943658 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" event={"ID":"93bc8cd9-3692-4406-8351-3a273fa1d9c8","Type":"ContainerStarted","Data":"e94f3b4e4b47ffee0b4abb397f8fc574ca5dc1a77b49376ad1f853e72bf7551f"} Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.944121 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.945875 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" event={"ID":"860d245e-aede-47bd-a8fe-b8bd2f79fd86","Type":"ContainerStarted","Data":"2166a06f7c25c15bb4a87da6912e39cd49d4f569552672d2bb6d9a051037f649"} Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.945908 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" 
event={"ID":"860d245e-aede-47bd-a8fe-b8bd2f79fd86","Type":"ContainerStarted","Data":"464ab96ef3b414eda445f4d69171e2a293e172f81083cbba8abccacc484b1a04"} Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.946015 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-8wzbs" podStartSLOduration=105.94599258 podStartE2EDuration="1m45.94599258s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:13.936579229 +0000 UTC m=+126.229424686" watchObservedRunningTime="2025-12-08 19:31:13.94599258 +0000 UTC m=+126.238838037" Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.951277 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" event={"ID":"f4cffd32-5b39-471d-aacb-44067449bf9a","Type":"ContainerStarted","Data":"afa609515cd586172bac9408305d67c7c73d7b2faa4779930252785a2f1e498e"} Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.959158 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.963938 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82"] Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.965165 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:13 crc kubenswrapper[5118]: E1208 19:31:13.965509 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.46549412 +0000 UTC m=+126.758339577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.965522 5118 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-trgjl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.965596 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" podUID="93bc8cd9-3692-4406-8351-3a273fa1d9c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.967570 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8"] Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.970585 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5"] Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.981168 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" podStartSLOduration=105.981144988 podStartE2EDuration="1m45.981144988s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:13.977940781 +0000 UTC m=+126.270786238" watchObservedRunningTime="2025-12-08 19:31:13.981144988 +0000 UTC m=+126.273990445" Dec 08 19:31:13 crc kubenswrapper[5118]: W1208 19:31:13.982076 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54da62c3_ab33_49b0_bc8e_27ed0cb9212a.slice/crio-51dcafbf2fd6f94d178ceaf567cae7480b7c28712fab4b986ab6c0a9bf0331ca WatchSource:0}: Error finding container 51dcafbf2fd6f94d178ceaf567cae7480b7c28712fab4b986ab6c0a9bf0331ca: Status 404 returned error can't find the container with id 51dcafbf2fd6f94d178ceaf567cae7480b7c28712fab4b986ab6c0a9bf0331ca Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.996331 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qvvjj" podStartSLOduration=105.996314351 podStartE2EDuration="1m45.996314351s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:13.994314398 +0000 UTC m=+126.287159865" watchObservedRunningTime="2025-12-08 19:31:13.996314351 +0000 UTC m=+126.289159798" Dec 08 19:31:13 crc kubenswrapper[5118]: I1208 19:31:13.998054 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.036982 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.040777 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zn9cs" podStartSLOduration=107.040745566 podStartE2EDuration="1m47.040745566s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:14.036329618 +0000 UTC m=+126.329175075" watchObservedRunningTime="2025-12-08 19:31:14.040745566 +0000 UTC m=+126.333591023" Dec 08 19:31:14 crc kubenswrapper[5118]: W1208 19:31:14.045806 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6574d02_8035_49ea_8d01_df1b3c1d1433.slice/crio-226baf9af1066b6e7e3379f9b914c9c2ecc98f6c1e10817bb1ed0cbf4e6d5955 WatchSource:0}: Error finding container 226baf9af1066b6e7e3379f9b914c9c2ecc98f6c1e10817bb1ed0cbf4e6d5955: Status 404 returned error can't find the container with id 226baf9af1066b6e7e3379f9b914c9c2ecc98f6c1e10817bb1ed0cbf4e6d5955 Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.067263 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.068538 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5"] Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.069634 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.569614006 +0000 UTC m=+126.862459673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.137315 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.137373 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.170250 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.170615 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.670593407 +0000 UTC m=+126.963438864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.170671 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.171003 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.670994627 +0000 UTC m=+126.963840084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: W1208 19:31:14.171990 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf76aa800_8554_45e3_ab38_e5b8efd7c3ad.slice/crio-3f0a490b3494732b3119b5e0391059c7a77f56507e5f323457bc8c1c92a2dae1 WatchSource:0}: Error finding container 3f0a490b3494732b3119b5e0391059c7a77f56507e5f323457bc8c1c92a2dae1: Status 404 returned error can't find the container with id 3f0a490b3494732b3119b5e0391059c7a77f56507e5f323457bc8c1c92a2dae1 Dec 08 19:31:14 crc kubenswrapper[5118]: W1208 19:31:14.189757 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f876ae2_ff59_421f_8f12_b6d980abb001.slice/crio-24c6d29b0dd8ebb4557d12cd7c461eb9d7e2d75e1a793221f24423f3567989c1 WatchSource:0}: Error finding container 24c6d29b0dd8ebb4557d12cd7c461eb9d7e2d75e1a793221f24423f3567989c1: Status 404 returned error can't find the container with id 24c6d29b0dd8ebb4557d12cd7c461eb9d7e2d75e1a793221f24423f3567989c1 Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.256657 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.256755 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lf9n6"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.266893 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.273543 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.273597 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.773560731 +0000 UTC m=+127.066406188 (durationBeforeRetry 500ms). 
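The manager.go:1169 warnings are cadvisor racing cri-o during this burst of pod creation: a cgroup watch event arrives carrying a container id the runtime cannot resolve yet (or has already torn down), the lookup returns a 404, and the event is dropped as a warning. It is transient bookkeeping noise rather than a pod failure; the pods involved start normally in the surrounding entries. The race, reduced to a few lines with illustrative names only:

// Illustrative shape of the cadvisor/runtime race: the watch event fires
// before the runtime has registered the container, so the id lookup 404s.
package main

import "fmt"

var runtimeContainers = map[string]bool{} // id -> known to the runtime

func processWatchEvent(id string) error {
    if !runtimeContainers[id] {
        return fmt.Errorf("can't find the container with id %s", id)
    }
    return nil
}

func main() {
    if err := processWatchEvent("51dcafbf2fd6"); err != nil {
        fmt.Println("Failed to process watch event:", err) // logged as a warning and ignored
    }
}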
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.274058 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.274497 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.774488226 +0000 UTC m=+127.067333683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: W1208 19:31:14.280754 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a73f457_25de_4a7a_8b9b_d4fccf4c27fb.slice/crio-cd263290b9061e5f4b8330d06b6649b19adbfc63ee390b0d6c0427dc402efadb WatchSource:0}: Error finding container cd263290b9061e5f4b8330d06b6649b19adbfc63ee390b0d6c0427dc402efadb: Status 404 returned error can't find the container with id cd263290b9061e5f4b8330d06b6649b19adbfc63ee390b0d6c0427dc402efadb Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.375791 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.376032 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.875981132 +0000 UTC m=+127.168826589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.376627 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.377271 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.877249255 +0000 UTC m=+127.170094712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.381430 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.382040 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-vjsnr"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.398463 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.425333 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.434901 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.442232 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.445157 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xc9vh"] Dec 08 19:31:14 crc kubenswrapper[5118]: W1208 19:31:14.445276 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1927987d_1fa4_4b00_b6f0_a7861eb10702.slice/crio-f1a762fe9afe5f0cb5bbe6833d0ac2a71568ded4d8d21d7a79067bfdd9ac5ae2 WatchSource:0}: Error finding container f1a762fe9afe5f0cb5bbe6833d0ac2a71568ded4d8d21d7a79067bfdd9ac5ae2: Status 404 returned error can't find the container with id 
f1a762fe9afe5f0cb5bbe6833d0ac2a71568ded4d8d21d7a79067bfdd9ac5ae2 Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.446314 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.449512 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-wb5jl"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.450812 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-8vsfg"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.452228 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-x84b4"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.460225 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.461398 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-b68tb"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.463361 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr"] Dec 08 19:31:14 crc kubenswrapper[5118]: W1208 19:31:14.485801 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97b35cab_0a8d_4331_8724_cbe640b9e24c.slice/crio-41d0d2272a135903c20d29a2f68849e6d3443700998cce62a1cd7d2bc0529932 WatchSource:0}: Error finding container 41d0d2272a135903c20d29a2f68849e6d3443700998cce62a1cd7d2bc0529932: Status 404 returned error can't find the container with id 41d0d2272a135903c20d29a2f68849e6d3443700998cce62a1cd7d2bc0529932 Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.486458 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.486881 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:14.986850257 +0000 UTC m=+127.279695714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.529017 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rxwj8"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.588196 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.588633 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:15.088612919 +0000 UTC m=+127.381458366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.606626 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-kk4vd"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.629070 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:14 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:14 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:14 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.629159 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.632703 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-fz5jn"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.663935 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40438: no serving certificate available for the kubelet" Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.689564 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.689849 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:15.189831588 +0000 UTC m=+127.482677045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: W1208 19:31:14.712366 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88131373_e414_436f_83e1_9d4aa4b55f62.slice/crio-e9c3dd7f772d0be447fca63666f164bc76da266d44dcae52e31e059a50659a1a WatchSource:0}: Error finding container e9c3dd7f772d0be447fca63666f164bc76da266d44dcae52e31e059a50659a1a: Status 404 returned error can't find the container with id e9c3dd7f772d0be447fca63666f164bc76da266d44dcae52e31e059a50659a1a Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.718198 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.725421 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40440: no serving certificate available for the kubelet" Dec 08 19:31:14 crc kubenswrapper[5118]: W1208 19:31:14.738669 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97e901dc_7a73_42d4_bbb9_3a7391a79105.slice/crio-348ef3d333b57e9d519f40fd90a05f838b98db2573624ad35aba0bc86ebc9a80 WatchSource:0}: Error finding container 348ef3d333b57e9d519f40fd90a05f838b98db2573624ad35aba0bc86ebc9a80: Status 404 returned error can't find the container with id 348ef3d333b57e9d519f40fd90a05f838b98db2573624ad35aba0bc86ebc9a80 Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.790650 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.791084 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:15.291067566 +0000 UTC m=+127.583913023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.792221 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-tkctz"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.802561 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40448: no serving certificate available for the kubelet" Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.808960 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qmvkf"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.840482 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-hxwm8"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.849926 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-qnl9q"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.870574 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z"] Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.892580 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.892857 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:15.392809358 +0000 UTC m=+127.685654825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.893012 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.893861 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
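Every UnmountVolume.TearDown and MountVolume.MountDevice failure in this run has the same root cause: the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with the kubelet (its csi-hostpathplugin-xc9vh pod is only just starting in this same stretch), so the volume manager cannot build a CSI client and keeps retrying. Node plugins register by dropping a socket into the kubelet's plugin-registration directory; a minimal Go sketch that checks for that socket, assuming the upstream default /var/lib/kubelet/plugins_registry path:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Default kubelet plugin-registration directory; CSI node plugins place a
// registration socket here that the kubelet's plugin watcher picks up.
// The path is the upstream default and may differ on a given node.
const pluginRegistry = "/var/lib/kubelet/plugins_registry"

func main() {
	entries, err := os.ReadDir(pluginRegistry)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read plugin registry:", err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		fmt.Println("registered plugin socket:", e.Name())
		if strings.Contains(e.Name(), "kubevirt.io.hostpath-provisioner") {
			found = true
		}
	}
	if !found {
		fmt.Println("hostpath provisioner not registered yet; mounts will keep retrying")
	}
}

From outside the node, `oc get csinode crc -o yaml` should show the same registration state once the driver's spec.drivers entry appears.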
No retries permitted until 2025-12-08 19:31:15.393849565 +0000 UTC m=+127.686695022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.907190 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40456: no serving certificate available for the kubelet" Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.971410 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44" event={"ID":"d19f60aa-72cf-4a40-a402-300df68ad28f","Type":"ContainerStarted","Data":"c38bd0d55bae43c0c4fa1861df38dce2ee99741044aeabbe7dfc8103835082bc"} Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.976817 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"f7beba02ebff8ad4c9709686d7757d5e15073aa2fa1cd0093f1b9cb493cf1b14"} Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.979154 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" event={"ID":"3f876ae2-ff59-421f-8f12-b6d980abb001","Type":"ContainerStarted","Data":"24c6d29b0dd8ebb4557d12cd7c461eb9d7e2d75e1a793221f24423f3567989c1"} Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.989289 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" event={"ID":"9f8fbbac-99ac-4a11-9f93-610d12177e71","Type":"ContainerStarted","Data":"ea45b3a71ceb58083c77da338b4f63ea4644e7a120b083d1a5f4f416082b684f"} Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.989388 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" event={"ID":"9f8fbbac-99ac-4a11-9f93-610d12177e71","Type":"ContainerStarted","Data":"6a361e41dc28e4938792574d63b1095fa629bdf97453dd96dbdc588e985edad0"} Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.994589 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.994747 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" event={"ID":"04382913-99f0-4bca-abaa-952bbb21e06a","Type":"ContainerStarted","Data":"8b6f5828f0fdb892e8fd5fdecce96e48b9c3c1c5aa7e4a09482d672986d7948d"} Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.995031 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
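The retry bookkeeping in these nestedpendingoperations entries is mechanical: "No retries permitted until" is the failure time plus durationBeforeRetry (the kubelet's exponential backoff, which grows on repeated failures of the same operation but sits at the initial 500ms throughout this stretch), and the "m=+127..." suffix is Go's monotonic-clock reading, i.e. roughly seconds since the kubelet process started (it launched at 19:29:07 above). A quick Go illustration of both pieces:

package main

import (
	"fmt"
	"time"
)

func main() {
	// "No retries permitted until" = failure time + durationBeforeRetry.
	failedAt := time.Date(2025, 12, 8, 19, 31, 14, 892857000, time.UTC)
	fmt.Println("retry gate:", failedAt.Add(500*time.Millisecond))

	// Timestamps printed from a live time.Time carry an "m=+..." suffix:
	// the monotonic-clock reading, measured from process start. That is why
	// the offsets in the log track kubelet uptime rather than wall time.
	fmt.Println(time.Now())
}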
No retries permitted until 2025-12-08 19:31:15.494996002 +0000 UTC m=+127.787841459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:14 crc kubenswrapper[5118]: I1208 19:31:14.995769 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:14 crc kubenswrapper[5118]: E1208 19:31:14.996326 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:15.496302867 +0000 UTC m=+127.789148484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.001960 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" event={"ID":"9556f84b-c3ef-4dd1-8483-67e5960385a1","Type":"ContainerStarted","Data":"42ecab08027ad597df3ef2893d33565b4cfe02e2962230231a9829a0ba8792eb"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.005761 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40470: no serving certificate available for the kubelet" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.006178 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" event={"ID":"4e349745-eed9-4471-abae-b45e90ce805d","Type":"ContainerStarted","Data":"e1651577929485ee906e1373df0b95de287f5184064e048143417905692c0b81"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.011774 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" event={"ID":"f7c859cf-4198-4549-b24d-d5cc7e650257","Type":"ContainerStarted","Data":"7e8fd1cc4ae740d0bac6b9c1f7d28a800fd8ca5aa0f834bdf045c3ab263b0f9a"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.019403 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lf9n6" event={"ID":"0179285f-606e-490f-b531-c95df3483e77","Type":"ContainerStarted","Data":"2fa0cebf661824d81c857edef171a4c00201c6f74df27502cee9c63399b54d4e"} Dec 08 19:31:15 crc kubenswrapper[5118]: W1208 19:31:15.022701 5118 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9693139_63f6_471e_ae19_744460a6b114.slice/crio-23e11cba439d7c4c586884f574cc8d5e1de03222c670354d72689e0682e8f7da WatchSource:0}: Error finding container 23e11cba439d7c4c586884f574cc8d5e1de03222c670354d72689e0682e8f7da: Status 404 returned error can't find the container with id 23e11cba439d7c4c586884f574cc8d5e1de03222c670354d72689e0682e8f7da Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.023013 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gjccc" event={"ID":"b309434e-b723-47e5-bce5-30f0c1ca2a1e","Type":"ContainerStarted","Data":"5146c8c4800a4a01c41400a3b66e51e05cc5d00f532684776d3738d7fe32d69d"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.023054 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gjccc" event={"ID":"b309434e-b723-47e5-bce5-30f0c1ca2a1e","Type":"ContainerStarted","Data":"3f8e2e75b970c27cced1271096cfb2565c4c713aaf09b33608d2f0af70bec4e6"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.026199 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" event={"ID":"0b7e81ca-c351-425e-a9e2-ae354f83f8b8","Type":"ContainerStarted","Data":"be5931cb7be1594096ba04bf6946bfd79a0e8ed5aeb591674e52ebf76fecb94c"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.030802 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-tkctz" event={"ID":"e65a45b2-4747-4f30-bbfa-d8a711e702e8","Type":"ContainerStarted","Data":"285827e7d8aa163e9455aa16ca8fcd48e362f014222bda47a39ba96326231743"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.031960 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.070099 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" event={"ID":"1927987d-1fa4-4b00-b6f0-a7861eb10702","Type":"ContainerStarted","Data":"f1a762fe9afe5f0cb5bbe6833d0ac2a71568ded4d8d21d7a79067bfdd9ac5ae2"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.075954 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" event={"ID":"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6","Type":"ContainerStarted","Data":"4808d043c34253c8cecd174d09ed651dab59b6f7ba3faf828b3411de5c70991b"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.085209 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" event={"ID":"ab666d86-db2b-4489-a868-8d24159ea775","Type":"ContainerStarted","Data":"b1dd1e6529e6d884f5849ddd6bdb2a77cfb55582e9b77db3cb34a1e97de8eb98"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.096553 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:15 crc kubenswrapper[5118]: E1208 19:31:15.096837 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" 
failed. No retries permitted until 2025-12-08 19:31:15.596820046 +0000 UTC m=+127.889665503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.117219 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40478: no serving certificate available for the kubelet" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.134368 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" event={"ID":"3a1eebb9-9d59-41be-bf07-445f24f0eb35","Type":"ContainerStarted","Data":"a82aac9cceec590bb800c8f923964f8e76e90ff35262c2085f1e6c051a0a7354"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.154475 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gjccc" podStartSLOduration=9.154443562 podStartE2EDuration="9.154443562s" podCreationTimestamp="2025-12-08 19:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:15.078813446 +0000 UTC m=+127.371658913" watchObservedRunningTime="2025-12-08 19:31:15.154443562 +0000 UTC m=+127.447289019" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.156562 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" podStartSLOduration=75.156548648 podStartE2EDuration="1m15.156548648s" podCreationTimestamp="2025-12-08 19:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:15.144638671 +0000 UTC m=+127.437484118" watchObservedRunningTime="2025-12-08 19:31:15.156548648 +0000 UTC m=+127.449394105" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.186820 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" event={"ID":"a6574d02-8035-49ea-8d01-df1b3c1d1433","Type":"ContainerStarted","Data":"4ce167cf1b05444118329ba3806f67656fdc7d1f060346bdb174fb92a3cd74be"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.186897 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" event={"ID":"a6574d02-8035-49ea-8d01-df1b3c1d1433","Type":"ContainerStarted","Data":"226baf9af1066b6e7e3379f9b914c9c2ecc98f6c1e10817bb1ed0cbf4e6d5955"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.204503 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" event={"ID":"af286630-dbd3-48df-93d0-52acf80a3a67","Type":"ContainerStarted","Data":"7348003d503522c0f133abe6b07aeb05d59363620f4c2e65e1b1a1c1d06bfc7c"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.205482 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:15 crc kubenswrapper[5118]: E1208 19:31:15.205998 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:15.705980905 +0000 UTC m=+127.998826362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.216247 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40490: no serving certificate available for the kubelet" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.217533 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-wb5jl" event={"ID":"97876313-0296-4efa-b7ea-403570a2cd81","Type":"ContainerStarted","Data":"52415c4ba897379a4c7cf650583dc10021dbef45ce322c433d34a07362addcf9"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.237154 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" event={"ID":"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb","Type":"ContainerStarted","Data":"cd263290b9061e5f4b8330d06b6649b19adbfc63ee390b0d6c0427dc402efadb"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.249141 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-r5dqp" podStartSLOduration=107.249091175 podStartE2EDuration="1m47.249091175s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:15.203862489 +0000 UTC m=+127.496707966" watchObservedRunningTime="2025-12-08 19:31:15.249091175 +0000 UTC m=+127.541936632" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.251423 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-57q82" podStartSLOduration=107.251406497 podStartE2EDuration="1m47.251406497s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:15.249274479 +0000 UTC m=+127.542119956" watchObservedRunningTime="2025-12-08 19:31:15.251406497 +0000 UTC m=+127.544251954" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.259212 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" event={"ID":"f76aa800-8554-45e3-ab38-e5b8efd7c3ad","Type":"ContainerStarted","Data":"3f0a490b3494732b3119b5e0391059c7a77f56507e5f323457bc8c1c92a2dae1"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.267752 5118 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" event={"ID":"b138de57-89ae-4cf5-8136-433862988df2","Type":"ContainerStarted","Data":"e96653d02643f887fa2cf535721f375d7acfe1316033321a3a2dce214fbc4f39"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.283870 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" event={"ID":"35446e1e-d728-44f3-b17f-372a50dbcb73","Type":"ContainerStarted","Data":"27a46496abd140904e38ebbc150625e5a86e7863776668bcd12eea10ae1a24f9"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.312324 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" event={"ID":"54da62c3-ab33-49b0-bc8e-27ed0cb9212a","Type":"ContainerStarted","Data":"12277e94fe8a40d9dfcb65665beb02b6dd9c0176bdee4c5058920314cc1961bd"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.312396 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" event={"ID":"54da62c3-ab33-49b0-bc8e-27ed0cb9212a","Type":"ContainerStarted","Data":"51dcafbf2fd6f94d178ceaf567cae7480b7c28712fab4b986ab6c0a9bf0331ca"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.313189 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:15 crc kubenswrapper[5118]: E1208 19:31:15.313672 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:15.813648975 +0000 UTC m=+128.106494542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.323861 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" event={"ID":"97e901dc-7a73-42d4-bbb9-3a7391a79105","Type":"ContainerStarted","Data":"348ef3d333b57e9d519f40fd90a05f838b98db2573624ad35aba0bc86ebc9a80"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.327478 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" event={"ID":"d5ad6856-ba98-4f91-b102-7e41020e2ecf","Type":"ContainerStarted","Data":"5c6166455162962e418f51aacf38cd16ec252eb4b0379d6c660c0fbb98d44618"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.330298 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" event={"ID":"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7","Type":"ContainerStarted","Data":"7af4fe690d068929e111ae489a9cd298b05e6f2f1d6db1bd5d4cc471f9aedbc1"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.331486 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"9e9dd82c89472c69975f39c9ca6f13980bcb78f4d681bdfda2f11658820b5d3d"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.337993 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40494: no serving certificate available for the kubelet" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.344236 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" event={"ID":"88131373-e414-436f-83e1-9d4aa4b55f62","Type":"ContainerStarted","Data":"e9c3dd7f772d0be447fca63666f164bc76da266d44dcae52e31e059a50659a1a"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.345045 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-wjdlj" podStartSLOduration=107.345018672 podStartE2EDuration="1m47.345018672s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:15.342064223 +0000 UTC m=+127.634909680" watchObservedRunningTime="2025-12-08 19:31:15.345018672 +0000 UTC m=+127.637864129" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.352794 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"cefc267509fa165a8d503f57ad104928b642562b90935d150dafd1f825731305"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.378505 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" 
event={"ID":"97b35cab-0a8d-4331-8724-cbe640b9e24c","Type":"ContainerStarted","Data":"41d0d2272a135903c20d29a2f68849e6d3443700998cce62a1cd7d2bc0529932"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.385807 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" event={"ID":"75d3ab55-5d06-433f-9c10-5113c2f9f367","Type":"ContainerStarted","Data":"329fedf4a33db05824393f13965a8ba7d68b472fd40b095bae08c3362873f00a"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.391637 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk" event={"ID":"72043ba9-5052-46eb-8c7c-2e61734cfd17","Type":"ContainerStarted","Data":"665e237559c0413e8794563e608ca9a319d2953ca7d18d038e03b38b2ac0384f"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.391695 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk" event={"ID":"72043ba9-5052-46eb-8c7c-2e61734cfd17","Type":"ContainerStarted","Data":"290ccbfcb1e229aa071efe699dae742bf623c687b1891ec4ed2377cf62983ba4"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.397946 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" event={"ID":"00a48e62-fdf7-4d8f-846f-295c3cb4489e","Type":"ContainerStarted","Data":"78b30e46fc8d446f215a850f0a4067c36e47bed84cf3250a8cd3688bce12ac09"} Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.432476 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qbjbk" podStartSLOduration=107.432451442 podStartE2EDuration="1m47.432451442s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:15.431679572 +0000 UTC m=+127.724525029" watchObservedRunningTime="2025-12-08 19:31:15.432451442 +0000 UTC m=+127.725296899" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.441334 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:15 crc kubenswrapper[5118]: E1208 19:31:15.455379 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:15.955356652 +0000 UTC m=+128.248202109 (durationBeforeRetry 500ms). 
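The pod_startup_latency_tracker lines are the kubelet's startup-SLO bookkeeping: the SLO duration is the observed running time minus podCreationTimestamp, less any time spent pulling images. The pull timestamps here are zero values (0001-01-01, images already present locally), so podStartSLOduration equals podStartE2EDuration. The arithmetic for the control-plane-machine-set-operator entry above, in Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the entry above.
	created := time.Date(2025, 12, 8, 19, 29, 28, 0, time.UTC)
	observed := time.Date(2025, 12, 8, 19, 31, 15, 432451442, time.UTC)

	// No image pull was recorded (zero-value pull timestamps), so the SLO
	// duration and the end-to-end duration are the same number.
	d := observed.Sub(created)
	fmt.Println(d)           // 1m47.432451442s  (podStartE2EDuration)
	fmt.Println(d.Seconds()) // 107.432451442    (podStartSLOduration)
}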
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.542391 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:15 crc kubenswrapper[5118]: E1208 19:31:15.542759 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:16.042729902 +0000 UTC m=+128.335575359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.631295 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:15 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:15 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:15 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.631720 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.644746 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:15 crc kubenswrapper[5118]: E1208 19:31:15.645285 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:16.145266165 +0000 UTC m=+128.438111622 (durationBeforeRetry 500ms). 
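The router's startup-probe output above uses the usual aggregated-healthz shape: one [+]/[-] line per registered sub-check ("reason withheld" unless verbose output is requested), and any failing check turns the whole endpoint into a 500, which is what the kubelet prober reports. A minimal stdlib sketch of a handler producing that shape (the check names mirror the router's output but the check bodies are hypothetical):

package main

import (
	"fmt"
	"net/http"
)

// check is one named sub-check of an aggregated healthz endpoint.
type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // probe sees 500
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }}, // hypothetical
		{"has-synced", func() error { return fmt.Errorf("not synced") }},  // hypothetical
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthz(checks))
	_ = http.ListenAndServe(":8080", nil)
}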
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.745432 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:15 crc kubenswrapper[5118]: E1208 19:31:15.745886 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:16.245840306 +0000 UTC m=+128.538685873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.847916 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:15 crc kubenswrapper[5118]: E1208 19:31:15.848210 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:16.348199394 +0000 UTC m=+128.641044841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.927001 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-trgjl" Dec 08 19:31:15 crc kubenswrapper[5118]: I1208 19:31:15.948792 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:15 crc kubenswrapper[5118]: E1208 19:31:15.949172 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:16.449154845 +0000 UTC m=+128.742000302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.033786 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40508: no serving certificate available for the kubelet" Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.050853 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:16 crc kubenswrapper[5118]: E1208 19:31:16.051244 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:16.551230146 +0000 UTC m=+128.844075603 (durationBeforeRetry 500ms). 
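The recurring "http: TLS handshake error ... no serving certificate available for the kubelet" entries mean a control-plane client at 192.168.126.11 (most likely the API server) is connecting to the kubelet's serving port before the kubelet's serving-certificate CSR has been issued, so its dynamic certificate source has nothing to hand the TLS stack yet. The failure mode is the ordinary crypto/tls one: when the GetCertificate callback returns an error, the handshake aborts and net/http logs "http: TLS handshake error from <addr>: <err>". A hedged sketch of that mechanism (not the kubelet's actual server setup):

package main

import (
	"crypto/tls"
	"errors"
	"log"
	"net/http"
)

func main() {
	srv := &http.Server{
		Addr: ":10250",
		TLSConfig: &tls.Config{
			// Until a certificate is issued, the dynamic source has nothing
			// to serve; every incoming handshake fails with this error and
			// the HTTP server logs it, as seen in the journal above.
			GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
				return nil, errors.New("no serving certificate available")
			},
		},
	}
	log.Fatal(srv.ListenAndServeTLS("", "")) // cert/key come from GetCertificate
}

On OpenShift these entries normally stop on their own once the node's pending kubelet-serving CSR is approved and issued (`oc get csr` shows its state).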
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.153764 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:16 crc kubenswrapper[5118]: E1208 19:31:16.154036 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:16.654018536 +0000 UTC m=+128.946863993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.254896 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:16 crc kubenswrapper[5118]: E1208 19:31:16.255401 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:16.755382218 +0000 UTC m=+129.048227705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.356336 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:16 crc kubenswrapper[5118]: E1208 19:31:16.357047 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:16.857027567 +0000 UTC m=+129.149873024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.461608 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:16 crc kubenswrapper[5118]: E1208 19:31:16.462332 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:16.962310954 +0000 UTC m=+129.255156411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.512745 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" event={"ID":"2a73f457-25de-4a7a-8b9b-d4fccf4c27fb","Type":"ContainerStarted","Data":"a89cdd9bfc68050176af0d054399233591fcb7f19a51d142276720700d1f4f8f"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.513056 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.551288 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" event={"ID":"f76aa800-8554-45e3-ab38-e5b8efd7c3ad","Type":"ContainerStarted","Data":"ab9fe6df003572635e6f104e5da221a86da22becbd36b06316f2808ad2ee37b5"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.553951 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.562441 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:16 crc kubenswrapper[5118]: E1208 19:31:16.563660 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:17.063636595 +0000 UTC m=+129.356482052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.598528 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wbzcd" podStartSLOduration=108.598511274 podStartE2EDuration="1m48.598511274s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:16.541951677 +0000 UTC m=+128.834797134" watchObservedRunningTime="2025-12-08 19:31:16.598511274 +0000 UTC m=+128.891356731" Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.599352 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" event={"ID":"35446e1e-d728-44f3-b17f-372a50dbcb73","Type":"ContainerStarted","Data":"7eaee0dd5cab7367a38db7b4834e134d6a4f9917dc88fbf31a87c33d1773e140"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.607540 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-qnl9q" event={"ID":"86f2d26a-630b-4a98-9dc3-c1ec245d7b6b","Type":"ContainerStarted","Data":"21a2fc2d19fd662c4c364c6a2b226a9c947a5d683df4ac2f73c6d77e4513d06f"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.629732 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:16 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:16 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:16 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.630260 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.635014 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qmvkf" event={"ID":"b9693139-63f6-471e-ae19-744460a6b114","Type":"ContainerStarted","Data":"23e11cba439d7c4c586884f574cc8d5e1de03222c670354d72689e0682e8f7da"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.652061 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"bc479abb788fb25800a8fbe3b41767f26ba306e22b64b4b6d25da5759dc54463"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.664303 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:16 crc kubenswrapper[5118]: E1208 19:31:16.664610 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:17.164598556 +0000 UTC m=+129.457444013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.671554 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" event={"ID":"97b35cab-0a8d-4331-8724-cbe640b9e24c","Type":"ContainerStarted","Data":"62d2e6cff824070202b1737059bd8bf959fec8d458747fb4178abfb65807a50d"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.675932 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.708630 5118 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-gtsl2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.708718 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" podUID="97b35cab-0a8d-4331-8724-cbe640b9e24c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.721680 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" podStartSLOduration=108.721653517 podStartE2EDuration="1m48.721653517s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:16.72027881 +0000 UTC m=+129.013124267" watchObservedRunningTime="2025-12-08 19:31:16.721653517 +0000 UTC m=+129.014498974" Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.722658 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wws6k" podStartSLOduration=108.722647944 podStartE2EDuration="1m48.722647944s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:16.67223316 +0000 UTC m=+128.965078637" watchObservedRunningTime="2025-12-08 19:31:16.722647944 +0000 UTC m=+129.015493411" Dec 08 19:31:16 crc 
kubenswrapper[5118]: I1208 19:31:16.764772 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" event={"ID":"75d3ab55-5d06-433f-9c10-5113c2f9f367","Type":"ContainerStarted","Data":"b201d76fc8362fc874649d89aa18438d8905b42d3d13c98deba9801a2e3407a7"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.765219 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:16 crc kubenswrapper[5118]: E1208 19:31:16.765459 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:17.265414433 +0000 UTC m=+129.558260070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.765627 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:16 crc kubenswrapper[5118]: E1208 19:31:16.766080 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:17.266073981 +0000 UTC m=+129.558919438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.811032 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44" event={"ID":"d19f60aa-72cf-4a40-a402-300df68ad28f","Type":"ContainerStarted","Data":"17525d5dbb43ba4a290a616f7028c5ee67ddccb29cf7acf720a1982970b1e325"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.825847 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" event={"ID":"3f876ae2-ff59-421f-8f12-b6d980abb001","Type":"ContainerStarted","Data":"bcc7316a55559206168d34a80f69ea7a4b8d98ee3794b83e8b3358651745dad8"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.851077 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lf9n6" event={"ID":"0179285f-606e-490f-b531-c95df3483e77","Type":"ContainerStarted","Data":"ae0999c60eba3cdef9c2e4405badfc65b9fc6c69a901dc69c36a52a2d164c189"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.864476 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-z6x7v" podStartSLOduration=108.864456993 podStartE2EDuration="1m48.864456993s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:16.793330317 +0000 UTC m=+129.086175774" watchObservedRunningTime="2025-12-08 19:31:16.864456993 +0000 UTC m=+129.157302450" Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.875794 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.876272 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z" event={"ID":"6dcf4602-a9b9-40b0-af37-2a69edc555f0","Type":"ContainerStarted","Data":"81cf9619b3813cb764e19354db820269cef6962ba013eb9bbae2040d87a87100"} Dec 08 19:31:16 crc kubenswrapper[5118]: E1208 19:31:16.876581 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:17.376537255 +0000 UTC m=+129.669382712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.896389 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-sljmn" podStartSLOduration=108.896369744 podStartE2EDuration="1m48.896369744s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:16.868335447 +0000 UTC m=+129.161180904" watchObservedRunningTime="2025-12-08 19:31:16.896369744 +0000 UTC m=+129.189215201" Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.897041 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lf9n6" podStartSLOduration=10.897036501 podStartE2EDuration="10.897036501s" podCreationTimestamp="2025-12-08 19:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:16.895607123 +0000 UTC m=+129.188452590" watchObservedRunningTime="2025-12-08 19:31:16.897036501 +0000 UTC m=+129.189881958" Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.915264 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" event={"ID":"0b7e81ca-c351-425e-a9e2-ae354f83f8b8","Type":"ContainerStarted","Data":"be7fa23aaf6b7b849f45cb117d40c08528afa8fb4a8d79323e55406bd0d7700b"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.948471 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" event={"ID":"1927987d-1fa4-4b00-b6f0-a7861eb10702","Type":"ContainerStarted","Data":"e92e6c32631917f4e0247e3ccfffedd9a093fac50e22a309222d0c4910a8dbe1"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.969131 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" event={"ID":"ab666d86-db2b-4489-a868-8d24159ea775","Type":"ContainerStarted","Data":"8c8f9eeeb059046cd25de131c14b026dab52645d33eb93b95e12da3991b31a32"} Dec 08 19:31:16 crc kubenswrapper[5118]: I1208 19:31:16.991844 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:16 crc kubenswrapper[5118]: E1208 19:31:16.993634 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:17.493619626 +0000 UTC m=+129.786465083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.035198 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-hxwm8" event={"ID":"db584c29-faf0-48cd-ac87-3af21a6fcbe4","Type":"ContainerStarted","Data":"30e3ddfa7b0856886b10416d1b4bb1053a6ab532d3ffbd755d67c5e7b5b20759"} Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.037785 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" podUID="574501a5-bb4b-4c42-9046-e00bc9447f56" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388" gracePeriod=30 Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.096261 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.097513 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:17.597477224 +0000 UTC m=+129.890322681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.199506 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.200066 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:17.700049089 +0000 UTC m=+129.992894546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.301362 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.301797 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:17.80177443 +0000 UTC m=+130.094619887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.403897 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.404388 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:17.904369824 +0000 UTC m=+130.197215281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.435926 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40516: no serving certificate available for the kubelet" Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.504971 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.505433 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.005405048 +0000 UTC m=+130.298250495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.505548 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.505955 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.005947812 +0000 UTC m=+130.298793269 (durationBeforeRetry 500ms). 
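The ???:1 entry just above, "no serving certificate available for the kubelet", is a separate issue from the CSI retries: it typically means the kubelet's serving-certificate CSR has not been issued or approved yet, so TLS handshakes against the kubelet's serving endpoint fail until one lands. A hedged sketch, under the same kubeconfig assumption as above, that lists kubelet-serving CSRs and their conditions:

```python
# Hypothetical follow-up, not from the log: look for pending CSRs signed
# by kubernetes.io/kubelet-serving, which back the kubelet's TLS serving cert.
from kubernetes import client, config

config.load_kube_config()
certs = client.CertificatesV1Api()

for csr in certs.list_certificate_signing_request().items:
    if csr.spec.signer_name == "kubernetes.io/kubelet-serving":
        conditions = [c.type for c in (csr.status.conditions or [])]
        print(csr.metadata.name, conditions or ["Pending"])  # no conditions => still pending
```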
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.610325 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.610491 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.110456158 +0000 UTC m=+130.403301615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.610981 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.611255 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.111244179 +0000 UTC m=+130.404089636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.632079 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:17 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:17 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:17 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.632151 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.711814 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.712091 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.212076497 +0000 UTC m=+130.504921954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.813079 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.813405 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.313391408 +0000 UTC m=+130.606236865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.924301 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.924525 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.424491339 +0000 UTC m=+130.717336806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:17 crc kubenswrapper[5118]: I1208 19:31:17.924985 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:17 crc kubenswrapper[5118]: E1208 19:31:17.925344 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.425331302 +0000 UTC m=+130.718176769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.025813 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.026123 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.526101248 +0000 UTC m=+130.818946715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.072769 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" event={"ID":"b138de57-89ae-4cf5-8136-433862988df2","Type":"ContainerStarted","Data":"b7bcd9be808e68024a89b8ea9973779487ccaa05993fd8547d5df41d66265c21"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.076585 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-qnl9q" event={"ID":"86f2d26a-630b-4a98-9dc3-c1ec245d7b6b","Type":"ContainerStarted","Data":"22b6306cb46a959a072ec0d0f3a9ae181c73ba24532873af7e4c0fd73b97368d"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.077588 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-qnl9q" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.078907 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-qnl9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.078962 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-qnl9q" podUID="86f2d26a-630b-4a98-9dc3-c1ec245d7b6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.083869 5118 generic.go:358] "Generic (PLEG): container finished" podID="97e901dc-7a73-42d4-bbb9-3a7391a79105" containerID="09f6a4659137fd9a0be649bcef452ba0d3cc13710028c48dc5270807383f82b5" exitCode=0 Dec 08 19:31:18 crc 
kubenswrapper[5118]: I1208 19:31:18.084375 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" event={"ID":"97e901dc-7a73-42d4-bbb9-3a7391a79105","Type":"ContainerDied","Data":"09f6a4659137fd9a0be649bcef452ba0d3cc13710028c48dc5270807383f82b5"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.128420 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.143177 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.643145117 +0000 UTC m=+130.935990574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.178335 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.178372 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" event={"ID":"d5ad6856-ba98-4f91-b102-7e41020e2ecf","Type":"ContainerStarted","Data":"c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.178392 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qmvkf" event={"ID":"b9693139-63f6-471e-ae19-744460a6b114","Type":"ContainerStarted","Data":"2733112d46d6266bfe8addf538a8e7e5f1a441f174f9b88d53c6ebf8f465efee"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.183404 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zmlzt" podStartSLOduration=110.18338201 podStartE2EDuration="1m50.18338201s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:18.138571425 +0000 UTC m=+130.431416902" watchObservedRunningTime="2025-12-08 19:31:18.18338201 +0000 UTC m=+130.476227467" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.195147 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" event={"ID":"e9ee217f-a422-41dc-99a3-72c1dcb1c3e7","Type":"ContainerStarted","Data":"2de4f4e52fbc1893835512ce2540e11892e6d8d4479985d331cf6fa5d50857a4"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.217095 5118 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" event={"ID":"88131373-e414-436f-83e1-9d4aa4b55f62","Type":"ContainerStarted","Data":"814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.217973 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.229432 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.231436 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.73141448 +0000 UTC m=+131.024259937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.243201 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" podStartSLOduration=110.243188164 podStartE2EDuration="1m50.243188164s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:18.240251595 +0000 UTC m=+130.533097052" watchObservedRunningTime="2025-12-08 19:31:18.243188164 +0000 UTC m=+130.536033621" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.244417 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"8d5ed111ef2d42c4d6dcc777022b8728e9218c3a25b1a0f90eb0113d592dca54"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.273594 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" event={"ID":"00a48e62-fdf7-4d8f-846f-295c3cb4489e","Type":"ContainerStarted","Data":"2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.275875 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.321931 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44" 
event={"ID":"d19f60aa-72cf-4a40-a402-300df68ad28f","Type":"ContainerStarted","Data":"2356612b1fdde0e912a8079f43a8d8261eef17be5b1819d5fea510c10935933e"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.322721 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.330617 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.332886 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.832869365 +0000 UTC m=+131.125715032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.338258 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"5a5117d42ea86c0882f09d9a12c2fcf9c10ebc6753328cafd7e201a0d41ec9c9"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.339164 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.369588 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" event={"ID":"9f8fbbac-99ac-4a11-9f93-610d12177e71","Type":"ContainerStarted","Data":"c7de87f8c0bbcc06732cd202c4914b7c37db7438c5fec064e003d3d3151d278b"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.370446 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.374539 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" event={"ID":"04382913-99f0-4bca-abaa-952bbb21e06a","Type":"ContainerStarted","Data":"e97101e030b54521823875629b63f0eeba6297a6b8353475bfdb7aaf53bd2e66"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.380053 5118 generic.go:358] "Generic (PLEG): container finished" podID="9556f84b-c3ef-4dd1-8483-67e5960385a1" containerID="a6d8e7abb59533b46ddc5388e8993d0c0ce5a94c07869305a616d238c75daf24" exitCode=0 Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.380442 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" 
event={"ID":"9556f84b-c3ef-4dd1-8483-67e5960385a1","Type":"ContainerDied","Data":"a6d8e7abb59533b46ddc5388e8993d0c0ce5a94c07869305a616d238c75daf24"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.386945 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" event={"ID":"4e349745-eed9-4471-abae-b45e90ce805d","Type":"ContainerStarted","Data":"f89930576c2ed33c9fbab037775a6e4dd08a585539bd0e88440cd0e7c683332c"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.400200 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-qnl9q" podStartSLOduration=110.400181049 podStartE2EDuration="1m50.400181049s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:18.340296493 +0000 UTC m=+130.633141950" watchObservedRunningTime="2025-12-08 19:31:18.400181049 +0000 UTC m=+130.693026506" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.407716 5118 generic.go:358] "Generic (PLEG): container finished" podID="f7c859cf-4198-4549-b24d-d5cc7e650257" containerID="27db0da50706042ecb412c6a062a24f65c7fb92d36d9d9f33d9195b617c4bbc3" exitCode=0 Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.407798 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" event={"ID":"f7c859cf-4198-4549-b24d-d5cc7e650257","Type":"ContainerDied","Data":"27db0da50706042ecb412c6a062a24f65c7fb92d36d9d9f33d9195b617c4bbc3"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.425287 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z" event={"ID":"6dcf4602-a9b9-40b0-af37-2a69edc555f0","Type":"ContainerStarted","Data":"6680472a6b677a81f3d09a312e947097fa29299a5c65a0eb0ea7313a19c65c4b"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.425724 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z" event={"ID":"6dcf4602-a9b9-40b0-af37-2a69edc555f0","Type":"ContainerStarted","Data":"aab0374c62c4f0f59d2eb528f5b05194d894a057d4ce0e61670d71ea912c4b2d"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.430381 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" event={"ID":"0b7e81ca-c351-425e-a9e2-ae354f83f8b8","Type":"ContainerStarted","Data":"567219d1b55bdd54e3c93a401676832bb73662b2aa57baa727fcd48b0c8b41cf"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.435655 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.435979 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.935956582 +0000 UTC m=+131.228802039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.437456 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.440470 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:18.940457152 +0000 UTC m=+131.233302779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.459060 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-tkctz" event={"ID":"e65a45b2-4747-4f30-bbfa-d8a711e702e8","Type":"ContainerStarted","Data":"c3ade4087de0ebae71b6fb99816769bf449c0843e650fe7afdb0acde09578d8a"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.459503 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.474232 5118 patch_prober.go:28] interesting pod/console-operator-67c89758df-tkctz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.474311 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-tkctz" podUID="e65a45b2-4747-4f30-bbfa-d8a711e702e8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.486073 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" event={"ID":"1927987d-1fa4-4b00-b6f0-a7861eb10702","Type":"ContainerStarted","Data":"b655f57424eaa32579e099a47b446ab5c5ca3dd50d555481637f086011571c41"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.506897 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44" podStartSLOduration=110.506734789 podStartE2EDuration="1m50.506734789s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:18.479863693 +0000 UTC m=+130.772709170" watchObservedRunningTime="2025-12-08 19:31:18.506734789 +0000 UTC m=+130.799580236" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.510111 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.541304 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.542376 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-hxwm8" event={"ID":"db584c29-faf0-48cd-ac87-3af21a6fcbe4","Type":"ContainerStarted","Data":"21ac2d1bf3c14161a1e402ba7db0c0fcb329b6428469033a82e98c8b836d07f1"} Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.542568 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.042543693 +0000 UTC m=+131.335389150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.542987 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.546467 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.046451158 +0000 UTC m=+131.339296615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.565968 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" event={"ID":"af286630-dbd3-48df-93d0-52acf80a3a67","Type":"ContainerStarted","Data":"c391c82f8de98ca2f7221d0248bd1369a8bc0bd1061b39c52ef493391aac2338"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.579769 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-wb5jl" event={"ID":"97876313-0296-4efa-b7ea-403570a2cd81","Type":"ContainerStarted","Data":"d92070c0498000c603bd8ae771326411aa9cc89d2ef91cdbb4302ee0a8b0dc6f"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.607035 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" event={"ID":"f76aa800-8554-45e3-ab38-e5b8efd7c3ad","Type":"ContainerStarted","Data":"6e3aab106364f747b9a20debfdc58f7d6efa3d52946295548bf0860beb4641ec"} Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.632569 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-gtsl2" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.652173 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:18 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:18 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:18 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.652279 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.661091 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.661223 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.161188436 +0000 UTC m=+131.454033893 (durationBeforeRetry 500ms). 
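The router startup probe keeps failing with an HTTP 500 whose body marks backend-http and has-synced as failed, while the earlier readiness probes (catalog-operator, downloads, console-operator) fail with plain connection refusals; both outcomes reduce to the same check the kubelet prober runs. A minimal illustrative sketch of that check, not the actual prober code; the URL is the downloads endpoint quoted in the log:

```python
# An HTTP probe in the kubelet's terms: 2xx/3xx is success, anything else
# (including "connect: connection refused" while the container boots) is failure.
import urllib.request
import urllib.error

def probe(url: str, timeout: float = 1.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False  # covers refused connections and HTTP 4xx/5xx responses

print(probe("http://10.217.0.16:8080/"))  # the downloads readiness endpoint above
```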
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.662649 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.673281 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.173262638 +0000 UTC m=+131.466108095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.722316 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-47lhr" podStartSLOduration=110.722297725 podStartE2EDuration="1m50.722297725s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:18.720299601 +0000 UTC m=+131.013145058" watchObservedRunningTime="2025-12-08 19:31:18.722297725 +0000 UTC m=+131.015143182" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.755219 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" podStartSLOduration=110.755204082 podStartE2EDuration="1m50.755204082s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:18.754648917 +0000 UTC m=+131.047494384" watchObservedRunningTime="2025-12-08 19:31:18.755204082 +0000 UTC m=+131.048049539" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.764127 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.764480 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.264463309 +0000 UTC m=+131.557308766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.805000 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" podStartSLOduration=110.804984679 podStartE2EDuration="1m50.804984679s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:18.785191932 +0000 UTC m=+131.078037389" watchObservedRunningTime="2025-12-08 19:31:18.804984679 +0000 UTC m=+131.097830136" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.868310 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.868611 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.368599945 +0000 UTC m=+131.661445402 (durationBeforeRetry 500ms). 
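The pod_startup_latency_tracker entries are internally consistent: with both pull timestamps at their zero values (no image pull was observed), the reported podStartSLOduration lines up exactly with the gap between podCreationTimestamp and watchObservedRunningTime. A worked check against the oauth-openshift entry above:

```python
# Recomputing the logged podStartSLOduration=110.804984679 for
# openshift-authentication/oauth-openshift-66458b6674-b68tb.
from datetime import datetime, timezone

created = datetime(2025, 12, 8, 19, 29, 28, tzinfo=timezone.utc)            # podCreationTimestamp
observed = datetime(2025, 12, 8, 19, 31, 18, 804984, tzinfo=timezone.utc)   # watch time, ns truncated to us
print((observed - created).total_seconds())  # 110.804984 s, i.e. the logged "1m50.8s"
```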
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.907560 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qmvkf" podStartSLOduration=110.907543463 podStartE2EDuration="1m50.907543463s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:18.904861441 +0000 UTC m=+131.197706898" watchObservedRunningTime="2025-12-08 19:31:18.907543463 +0000 UTC m=+131.200388920" Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.969221 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.969402 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.469382471 +0000 UTC m=+131.762227928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:18 crc kubenswrapper[5118]: I1208 19:31:18.970640 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:18 crc kubenswrapper[5118]: E1208 19:31:18.970942 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.470932622 +0000 UTC m=+131.763778069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.019714 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" podStartSLOduration=111.019675082 podStartE2EDuration="1m51.019675082s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.015023718 +0000 UTC m=+131.307869195" watchObservedRunningTime="2025-12-08 19:31:19.019675082 +0000 UTC m=+131.312520539" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.046478 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-hxwm8" podStartSLOduration=111.046456365 podStartE2EDuration="1m51.046456365s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.043326932 +0000 UTC m=+131.336172389" watchObservedRunningTime="2025-12-08 19:31:19.046456365 +0000 UTC m=+131.339301822" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.072131 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:19 crc kubenswrapper[5118]: E1208 19:31:19.072419 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.572401957 +0000 UTC m=+131.865247414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.077488 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kr8np" podStartSLOduration=111.077470352 podStartE2EDuration="1m51.077470352s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.076838946 +0000 UTC m=+131.369684403" watchObservedRunningTime="2025-12-08 19:31:19.077470352 +0000 UTC m=+131.370315809" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.139390 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-tkctz" podStartSLOduration=111.139372293 podStartE2EDuration="1m51.139372293s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.106154537 +0000 UTC m=+131.398999994" watchObservedRunningTime="2025-12-08 19:31:19.139372293 +0000 UTC m=+131.432217750" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.155955 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nhcw5" podStartSLOduration=111.155939753 podStartE2EDuration="1m51.155939753s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.154396152 +0000 UTC m=+131.447241609" watchObservedRunningTime="2025-12-08 19:31:19.155939753 +0000 UTC m=+131.448785210" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.173317 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:19 crc kubenswrapper[5118]: E1208 19:31:19.173680 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.673668576 +0000 UTC m=+131.966514033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.257046 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.269944 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-bjxx7" podStartSLOduration=111.269909152 podStartE2EDuration="1m51.269909152s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.268275639 +0000 UTC m=+131.561121096" watchObservedRunningTime="2025-12-08 19:31:19.269909152 +0000 UTC m=+131.562754609" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.275208 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:19 crc kubenswrapper[5118]: E1208 19:31:19.275771 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.775751228 +0000 UTC m=+132.068596685 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.339263 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6xz4z" podStartSLOduration=111.339235869 podStartE2EDuration="1m51.339235869s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.336964069 +0000 UTC m=+131.629809526" watchObservedRunningTime="2025-12-08 19:31:19.339235869 +0000 UTC m=+131.632081326" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.376552 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:19 crc kubenswrapper[5118]: E1208 19:31:19.376877 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.876865363 +0000 UTC m=+132.169710820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.402306 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q45w8" podStartSLOduration=111.40228107 podStartE2EDuration="1m51.40228107s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.400259716 +0000 UTC m=+131.693105173" watchObservedRunningTime="2025-12-08 19:31:19.40228107 +0000 UTC m=+131.695126527" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.443115 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-z6tr5" podStartSLOduration=111.443102199 podStartE2EDuration="1m51.443102199s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.440394217 +0000 UTC m=+131.733239674" watchObservedRunningTime="2025-12-08 19:31:19.443102199 +0000 UTC m=+131.735947656" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.468472 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7px9v"] Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.478231 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:19 crc kubenswrapper[5118]: E1208 19:31:19.478494 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:19.978477061 +0000 UTC m=+132.271322518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.507216 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-wb5jl" podStartSLOduration=111.507203927 podStartE2EDuration="1m51.507203927s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.483730572 +0000 UTC m=+131.776576029" watchObservedRunningTime="2025-12-08 19:31:19.507203927 +0000 UTC m=+131.800049384" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.530704 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7px9v"] Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.530856 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.550834 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.588131 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.588311 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-catalog-content\") pod \"certified-operators-7px9v\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.588392 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b5gm\" (UniqueName: \"kubernetes.io/projected/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-kube-api-access-5b5gm\") pod \"certified-operators-7px9v\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.588427 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-utilities\") pod \"certified-operators-7px9v\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:31:19 crc kubenswrapper[5118]: E1208 19:31:19.589146 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 19:31:20.089126251 +0000 UTC m=+132.381971708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.598308 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-vjsnr" podStartSLOduration=111.598280564 podStartE2EDuration="1m51.598280564s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.571140151 +0000 UTC m=+131.863985608" watchObservedRunningTime="2025-12-08 19:31:19.598280564 +0000 UTC m=+131.891126021" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.649995 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:19 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:19 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:19 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.650068 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.691277 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cs27m"] Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.694083 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.694463 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-catalog-content\") pod \"certified-operators-7px9v\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.694514 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5b5gm\" (UniqueName: \"kubernetes.io/projected/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-kube-api-access-5b5gm\") pod \"certified-operators-7px9v\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.694596 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-utilities\") pod \"certified-operators-7px9v\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.695410 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-utilities\") pod \"certified-operators-7px9v\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:31:19 crc kubenswrapper[5118]: E1208 19:31:19.695897 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:20.195874986 +0000 UTC m=+132.488720443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.696522 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-catalog-content\") pod \"certified-operators-7px9v\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.701736 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.704410 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" event={"ID":"9556f84b-c3ef-4dd1-8483-67e5960385a1","Type":"ContainerStarted","Data":"33c88fe8edc23495f7814f305cbe263851e7a4c74a33dfecd482bb78c900262d"} Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.712087 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.724249 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" event={"ID":"f7c859cf-4198-4549-b24d-d5cc7e650257","Type":"ContainerStarted","Data":"d1f37e78475dc4ab040e65e658af020341b011c475095f8a3d7667c58b2453e4"} Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.735284 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cs27m"] Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.774226 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-x84b4" event={"ID":"af286630-dbd3-48df-93d0-52acf80a3a67","Type":"ContainerStarted","Data":"1854ecf1b4769f61ba4169806882f80bfb81c74386c61fe4cf96381e7027f410"} Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.778885 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b5gm\" (UniqueName: \"kubernetes.io/projected/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-kube-api-access-5b5gm\") pod \"certified-operators-7px9v\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.793607 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" event={"ID":"97e901dc-7a73-42d4-bbb9-3a7391a79105","Type":"ContainerStarted","Data":"e49dee22224d0dcf2a1b9197c325946b96f02e38493f9937435e0a52f65ac635"} Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.794584 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.810625 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qmvkf" event={"ID":"b9693139-63f6-471e-ae19-744460a6b114","Type":"ContainerStarted","Data":"38fb5d421329f6b083e91ffce0196eb9d110b8202abb8cffc66849599f9fd7d8"} Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.813481 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-utilities\") pod \"community-operators-cs27m\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.813557 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " 
pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.813724 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-catalog-content\") pod \"community-operators-cs27m\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.813795 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp46w\" (UniqueName: \"kubernetes.io/projected/9801ce4f-e9bf-4c09-a624-81675bbda6fa-kube-api-access-dp46w\") pod \"community-operators-cs27m\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.823004 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-qnl9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.823083 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-qnl9q" podUID="86f2d26a-630b-4a98-9dc3-c1ec245d7b6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.825278 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" podStartSLOduration=111.825260516 podStartE2EDuration="1m51.825260516s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.82358119 +0000 UTC m=+132.116426657" watchObservedRunningTime="2025-12-08 19:31:19.825260516 +0000 UTC m=+132.118105973" Dec 08 19:31:19 crc kubenswrapper[5118]: E1208 19:31:19.826147 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:20.326129738 +0000 UTC m=+132.618975195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.855129 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" podStartSLOduration=111.855102901 podStartE2EDuration="1m51.855102901s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:19.854436512 +0000 UTC m=+132.147281969" watchObservedRunningTime="2025-12-08 19:31:19.855102901 +0000 UTC m=+132.147948358" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.865790 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.914782 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.916023 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dp46w\" (UniqueName: \"kubernetes.io/projected/9801ce4f-e9bf-4c09-a624-81675bbda6fa-kube-api-access-dp46w\") pod \"community-operators-cs27m\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.916450 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-utilities\") pod \"community-operators-cs27m\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:31:19 crc kubenswrapper[5118]: E1208 19:31:19.919596 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:20.419573659 +0000 UTC m=+132.712419116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.921090 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-catalog-content\") pod \"community-operators-cs27m\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.940090 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-utilities\") pod \"community-operators-cs27m\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.942067 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-catalog-content\") pod \"community-operators-cs27m\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.945977 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t8htv"] Dec 08 19:31:19 crc kubenswrapper[5118]: I1208 19:31:19.976455 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp46w\" (UniqueName: \"kubernetes.io/projected/9801ce4f-e9bf-4c09-a624-81675bbda6fa-kube-api-access-dp46w\") pod \"community-operators-cs27m\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.016668 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8htv"] Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.016789 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-tkctz" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.017256 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8htv" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.022012 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:20 crc kubenswrapper[5118]: E1208 19:31:20.022447 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 19:31:20.5224305 +0000 UTC m=+132.815275957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.052314 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5qrgm"] Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.053023 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.056506 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qrgm" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.062055 5118 ???:1] "http: TLS handshake error from 192.168.126.11:49788: no serving certificate available for the kubelet" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.077181 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qrgm"] Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.127790 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.128362 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-catalog-content\") pod \"community-operators-5qrgm\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") " pod="openshift-marketplace/community-operators-5qrgm" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.128449 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-utilities\") pod \"community-operators-5qrgm\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") " pod="openshift-marketplace/community-operators-5qrgm" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.128481 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-catalog-content\") pod \"certified-operators-t8htv\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") " pod="openshift-marketplace/certified-operators-t8htv" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.128503 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j47x\" (UniqueName: \"kubernetes.io/projected/fdc926a9-b83b-4c7d-9558-98ab053066a1-kube-api-access-5j47x\") pod \"community-operators-5qrgm\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") " pod="openshift-marketplace/community-operators-5qrgm" Dec 08 19:31:20 crc 
kubenswrapper[5118]: I1208 19:31:20.128537 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-utilities\") pod \"certified-operators-t8htv\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") " pod="openshift-marketplace/certified-operators-t8htv" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.128558 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln8cm\" (UniqueName: \"kubernetes.io/projected/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-kube-api-access-ln8cm\") pod \"certified-operators-t8htv\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") " pod="openshift-marketplace/certified-operators-t8htv" Dec 08 19:31:20 crc kubenswrapper[5118]: E1208 19:31:20.128754 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:20.628726664 +0000 UTC m=+132.921572121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.196936 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.212049 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.221708 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.224647 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.224725 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.233333 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02df487c-5002-42fc-940c-02d7df55f614-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"02df487c-5002-42fc-940c-02d7df55f614\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.233456 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-utilities\") pod \"community-operators-5qrgm\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") " pod="openshift-marketplace/community-operators-5qrgm" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.233534 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-catalog-content\") pod \"certified-operators-t8htv\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") " pod="openshift-marketplace/certified-operators-t8htv" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.233633 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5j47x\" (UniqueName: \"kubernetes.io/projected/fdc926a9-b83b-4c7d-9558-98ab053066a1-kube-api-access-5j47x\") pod \"community-operators-5qrgm\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") " pod="openshift-marketplace/community-operators-5qrgm" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.233736 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-utilities\") pod \"certified-operators-t8htv\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") " pod="openshift-marketplace/certified-operators-t8htv" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.233821 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ln8cm\" (UniqueName: \"kubernetes.io/projected/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-kube-api-access-ln8cm\") pod \"certified-operators-t8htv\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") " pod="openshift-marketplace/certified-operators-t8htv" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.233922 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.233994 
5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02df487c-5002-42fc-940c-02d7df55f614-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"02df487c-5002-42fc-940c-02d7df55f614\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.234072 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-catalog-content\") pod \"community-operators-5qrgm\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") " pod="openshift-marketplace/community-operators-5qrgm" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.236230 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-utilities\") pod \"community-operators-5qrgm\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") " pod="openshift-marketplace/community-operators-5qrgm" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.236589 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-catalog-content\") pod \"certified-operators-t8htv\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") " pod="openshift-marketplace/certified-operators-t8htv" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.237038 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-utilities\") pod \"certified-operators-t8htv\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") " pod="openshift-marketplace/certified-operators-t8htv" Dec 08 19:31:20 crc kubenswrapper[5118]: E1208 19:31:20.237342 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:20.737329729 +0000 UTC m=+133.030175186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.237819 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-catalog-content\") pod \"community-operators-5qrgm\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") " pod="openshift-marketplace/community-operators-5qrgm" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.279849 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j47x\" (UniqueName: \"kubernetes.io/projected/fdc926a9-b83b-4c7d-9558-98ab053066a1-kube-api-access-5j47x\") pod \"community-operators-5qrgm\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") " pod="openshift-marketplace/community-operators-5qrgm" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.297596 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln8cm\" (UniqueName: \"kubernetes.io/projected/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-kube-api-access-ln8cm\") pod \"certified-operators-t8htv\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") " pod="openshift-marketplace/certified-operators-t8htv" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.338912 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8htv" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.343434 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.343774 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02df487c-5002-42fc-940c-02d7df55f614-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"02df487c-5002-42fc-940c-02d7df55f614\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.343849 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02df487c-5002-42fc-940c-02d7df55f614-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"02df487c-5002-42fc-940c-02d7df55f614\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.344020 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02df487c-5002-42fc-940c-02d7df55f614-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"02df487c-5002-42fc-940c-02d7df55f614\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:20 crc kubenswrapper[5118]: E1208 19:31:20.344116 5118 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:20.844093485 +0000 UTC m=+133.136938942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.411566 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qrgm" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.446762 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:20 crc kubenswrapper[5118]: E1208 19:31:20.447475 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:20.94744463 +0000 UTC m=+133.240290087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.448731 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02df487c-5002-42fc-940c-02d7df55f614-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"02df487c-5002-42fc-940c-02d7df55f614\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.548609 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:20 crc kubenswrapper[5118]: E1208 19:31:20.548833 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.048800821 +0000 UTC m=+133.341646278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.549998 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:20 crc kubenswrapper[5118]: E1208 19:31:20.550368 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.050352542 +0000 UTC m=+133.343198029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.575033 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.632349 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:20 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:20 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:20 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.632441 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.655574 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:20 crc kubenswrapper[5118]: E1208 19:31:20.655916 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.155893626 +0000 UTC m=+133.448739083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.759563 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:20 crc kubenswrapper[5118]: E1208 19:31:20.759981 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.2599693 +0000 UTC m=+133.552814757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.790970 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7px9v"] Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.824104 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" event={"ID":"9556f84b-c3ef-4dd1-8483-67e5960385a1","Type":"ContainerStarted","Data":"b822f8cb2d2a1f34c4224ffb7a678cd2f22d2b9b25c568a310f471b003cb7c72"} Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.829566 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-qnl9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.829660 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-qnl9q" podUID="86f2d26a-630b-4a98-9dc3-c1ec245d7b6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.861469 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:20 crc kubenswrapper[5118]: E1208 19:31:20.864044 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.364024633 +0000 UTC m=+133.656870090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.873042 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" podStartSLOduration=112.873011413 podStartE2EDuration="1m52.873011413s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:20.869670784 +0000 UTC m=+133.162516251" watchObservedRunningTime="2025-12-08 19:31:20.873011413 +0000 UTC m=+133.165856860" Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.923945 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cs27m"] Dec 08 19:31:20 crc kubenswrapper[5118]: I1208 19:31:20.963567 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:20 crc kubenswrapper[5118]: E1208 19:31:20.963896 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.463883306 +0000 UTC m=+133.756728763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.050657 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qrgm"] Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.071029 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.071364 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 19:31:21.571343929 +0000 UTC m=+133.864189386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.140844 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 19:31:21 crc kubenswrapper[5118]: W1208 19:31:21.162596 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod02df487c_5002_42fc_940c_02d7df55f614.slice/crio-acd56b01c07bc83cda9a50e3a4f0443360c016636955a12a431a06dd55f298fb WatchSource:0}: Error finding container acd56b01c07bc83cda9a50e3a4f0443360c016636955a12a431a06dd55f298fb: Status 404 returned error can't find the container with id acd56b01c07bc83cda9a50e3a4f0443360c016636955a12a431a06dd55f298fb Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.173612 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.173948 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.673934645 +0000 UTC m=+133.966780102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.176674 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8htv"] Dec 08 19:31:21 crc kubenswrapper[5118]: W1208 19:31:21.193796 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7e11da8_7a5b_49b5_a421_678c6c8fc10e.slice/crio-973dfe9827d648c3ab46b89f6c850d965307522b1d44727e1a75689499bda12c WatchSource:0}: Error finding container 973dfe9827d648c3ab46b89f6c850d965307522b1d44727e1a75689499bda12c: Status 404 returned error can't find the container with id 973dfe9827d648c3ab46b89f6c850d965307522b1d44727e1a75689499bda12c Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.274753 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.274930 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.77490429 +0000 UTC m=+134.067749747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.275036 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.275348 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.775336671 +0000 UTC m=+134.068182128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.377353 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.377578 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.87754364 +0000 UTC m=+134.170389097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.377744 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.378093 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.878082475 +0000 UTC m=+134.170928092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.442540 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8rpxq"] Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.479095 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.479300 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:21.979270136 +0000 UTC m=+134.272115603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.580392 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.580675 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.080663473 +0000 UTC m=+134.373508930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.593853 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rpxq"] Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.594152 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.598059 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.622832 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.647620 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:21 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:21 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:21 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.647960 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.682011 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.683228 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.18319203 +0000 UTC m=+134.476037487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.683505 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbrl8\" (UniqueName: \"kubernetes.io/projected/70414740-2872-4ebd-b3b5-ded149c0f019-kube-api-access-bbrl8\") pod \"redhat-marketplace-8rpxq\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.684126 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-utilities\") pod \"redhat-marketplace-8rpxq\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.684321 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-catalog-content\") pod \"redhat-marketplace-8rpxq\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.684504 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.684991 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.184982929 +0000 UTC m=+134.477828386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.798541 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.799286 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bbrl8\" (UniqueName: \"kubernetes.io/projected/70414740-2872-4ebd-b3b5-ded149c0f019-kube-api-access-bbrl8\") pod \"redhat-marketplace-8rpxq\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.799320 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-utilities\") pod \"redhat-marketplace-8rpxq\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.799349 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-catalog-content\") pod \"redhat-marketplace-8rpxq\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.799931 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-catalog-content\") pod \"redhat-marketplace-8rpxq\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.800017 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.299994753 +0000 UTC m=+134.592840210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.800739 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-utilities\") pod \"redhat-marketplace-8rpxq\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.813999 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.815769 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.846898 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.848165 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.849287 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbrl8\" (UniqueName: \"kubernetes.io/projected/70414740-2872-4ebd-b3b5-ded149c0f019-kube-api-access-bbrl8\") pod \"redhat-marketplace-8rpxq\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.873741 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j28hm"] Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.905951 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:21 crc kubenswrapper[5118]: E1208 19:31:21.906431 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.406411545 +0000 UTC m=+134.699257002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.909729 5118 generic.go:358] "Generic (PLEG): container finished" podID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" containerID="bed79bb93d24e2d1a555ecf210feee48f52228ceb9ef28fb1551e0680bb48339" exitCode=0 Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.944648 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:31:21 crc kubenswrapper[5118]: I1208 19:31:21.991791 5118 generic.go:358] "Generic (PLEG): container finished" podID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" containerID="40a6430fc96c6a250f5e043f76d146894da29d7d7d13546c7c964b8359716a34" exitCode=0 Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.014527 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.014955 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.514936083 +0000 UTC m=+134.807781540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.092364 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j28hm"] Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.092730 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7px9v" event={"ID":"6d799616-15c0-4e4f-8cbb-5f33d9f607ef","Type":"ContainerDied","Data":"bed79bb93d24e2d1a555ecf210feee48f52228ceb9ef28fb1551e0680bb48339"} Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.092610 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-qnl9q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.092807 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-qnl9q" podUID="86f2d26a-630b-4a98-9dc3-c1ec245d7b6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.093393 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.093419 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7px9v" event={"ID":"6d799616-15c0-4e4f-8cbb-5f33d9f607ef","Type":"ContainerStarted","Data":"cba87a5bd1ba6619a772d7ab1824302c196b62b48ebbda440689c9861ed90f87"} Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.093435 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" event={"ID":"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6","Type":"ContainerStarted","Data":"3f04a629c2ce5b7ce93cbc0a52b69f7a16b1205d327fcff5924eeb7c95947838"} Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.093449 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs27m" event={"ID":"9801ce4f-e9bf-4c09-a624-81675bbda6fa","Type":"ContainerDied","Data":"40a6430fc96c6a250f5e043f76d146894da29d7d7d13546c7c964b8359716a34"} Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.093461 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs27m" event={"ID":"9801ce4f-e9bf-4c09-a624-81675bbda6fa","Type":"ContainerStarted","Data":"bfcc1263f82200d98ec2e9b37f84cf8657adb645fe1317e0c7485ba5e0baab9e"} Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.093469 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qrgm" event={"ID":"fdc926a9-b83b-4c7d-9558-98ab053066a1","Type":"ContainerStarted","Data":"0da51729f669d267f79c5f2d60d3a62b48cb3decf8bd5fa07706b364dff02185"} Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.093480 5118 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"02df487c-5002-42fc-940c-02d7df55f614","Type":"ContainerStarted","Data":"acd56b01c07bc83cda9a50e3a4f0443360c016636955a12a431a06dd55f298fb"} Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.093490 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8htv" event={"ID":"c7e11da8-7a5b-49b5-a421-678c6c8fc10e","Type":"ContainerStarted","Data":"973dfe9827d648c3ab46b89f6c850d965307522b1d44727e1a75689499bda12c"} Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.093905 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j28hm" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.116598 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.116706 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df6df\" (UniqueName: \"kubernetes.io/projected/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-kube-api-access-df6df\") pod \"redhat-marketplace-j28hm\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") " pod="openshift-marketplace/redhat-marketplace-j28hm" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.116894 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-catalog-content\") pod \"redhat-marketplace-j28hm\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") " pod="openshift-marketplace/redhat-marketplace-j28hm" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.117104 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-utilities\") pod \"redhat-marketplace-j28hm\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") " pod="openshift-marketplace/redhat-marketplace-j28hm" Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.119424 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.619407274 +0000 UTC m=+134.912252801 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.133356 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-fz5jn" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.133818 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jpdh9" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.170247 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.170287 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-hxwm8" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.171962 5118 patch_prober.go:28] interesting pod/console-64d44f6ddf-hxwm8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.172005 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-hxwm8" podUID="db584c29-faf0-48cd-ac87-3af21a6fcbe4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.221297 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.222144 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.722127506 +0000 UTC m=+135.014972963 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.222219 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-catalog-content\") pod \"redhat-marketplace-j28hm\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") " pod="openshift-marketplace/redhat-marketplace-j28hm" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.222275 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-utilities\") pod \"redhat-marketplace-j28hm\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") " pod="openshift-marketplace/redhat-marketplace-j28hm" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.222329 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.222372 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-df6df\" (UniqueName: \"kubernetes.io/projected/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-kube-api-access-df6df\") pod \"redhat-marketplace-j28hm\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") " pod="openshift-marketplace/redhat-marketplace-j28hm" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.223657 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-catalog-content\") pod \"redhat-marketplace-j28hm\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") " pod="openshift-marketplace/redhat-marketplace-j28hm" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.224229 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-utilities\") pod \"redhat-marketplace-j28hm\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") " pod="openshift-marketplace/redhat-marketplace-j28hm" Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.225098 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.725086135 +0000 UTC m=+135.017931582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.252427 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-df6df\" (UniqueName: \"kubernetes.io/projected/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-kube-api-access-df6df\") pod \"redhat-marketplace-j28hm\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") " pod="openshift-marketplace/redhat-marketplace-j28hm" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.310780 5118 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-8vsfg container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 19:31:22 crc kubenswrapper[5118]: [+]log ok Dec 08 19:31:22 crc kubenswrapper[5118]: [+]etcd ok Dec 08 19:31:22 crc kubenswrapper[5118]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 19:31:22 crc kubenswrapper[5118]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 19:31:22 crc kubenswrapper[5118]: [+]poststarthook/max-in-flight-filter ok Dec 08 19:31:22 crc kubenswrapper[5118]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 19:31:22 crc kubenswrapper[5118]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 08 19:31:22 crc kubenswrapper[5118]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 08 19:31:22 crc kubenswrapper[5118]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 08 19:31:22 crc kubenswrapper[5118]: [+]poststarthook/project.openshift.io-projectcache ok Dec 08 19:31:22 crc kubenswrapper[5118]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 08 19:31:22 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-startinformers ok Dec 08 19:31:22 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 08 19:31:22 crc kubenswrapper[5118]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 19:31:22 crc kubenswrapper[5118]: livez check failed Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.310854 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg" podUID="9556f84b-c3ef-4dd1-8483-67e5960385a1" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.328367 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.828350023 +0000 UTC m=+135.121195470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.328294 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.328556 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.328913 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.828906509 +0000 UTC m=+135.121751966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.432417 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rpxq"] Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.433231 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.433667 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:22.933645385 +0000 UTC m=+135.226490842 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.472410 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j28hm" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.535588 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.536504 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:23.036487881 +0000 UTC m=+135.329333338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.634406 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:22 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:22 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:22 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.634496 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.637869 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.638665 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:23.138619518 +0000 UTC m=+135.431464965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.656187 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mlt4z"] Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.716976 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mlt4z"] Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.717201 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.719779 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.741535 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-catalog-content\") pod \"redhat-operators-mlt4z\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.741634 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.741702 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-utilities\") pod \"redhat-operators-mlt4z\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.741756 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjmkp\" (UniqueName: \"kubernetes.io/projected/14b81eee-396d-4e4e-a48c-87183aa677a0-kube-api-access-gjmkp\") pod \"redhat-operators-mlt4z\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.742139 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:23.242120942 +0000 UTC m=+135.534966619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.842516 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.843047 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-catalog-content\") pod \"redhat-operators-mlt4z\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.843088 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-utilities\") pod \"redhat-operators-mlt4z\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.843119 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gjmkp\" (UniqueName: \"kubernetes.io/projected/14b81eee-396d-4e4e-a48c-87183aa677a0-kube-api-access-gjmkp\") pod \"redhat-operators-mlt4z\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.843935 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:23.343901209 +0000 UTC m=+135.636746676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.844151 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-catalog-content\") pod \"redhat-operators-mlt4z\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.844292 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-utilities\") pod \"redhat-operators-mlt4z\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.872196 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjmkp\" (UniqueName: \"kubernetes.io/projected/14b81eee-396d-4e4e-a48c-87183aa677a0-kube-api-access-gjmkp\") pod \"redhat-operators-mlt4z\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:31:22 crc kubenswrapper[5118]: I1208 19:31:22.944254 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:22 crc kubenswrapper[5118]: E1208 19:31:22.944931 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:23.444899596 +0000 UTC m=+135.737745233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.019820 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j28hm"] Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.039639 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gjccc" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.044140 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w85rg"] Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.050005 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.052109 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:23.552083239 +0000 UTC m=+135.844928696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.052229 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.052460 5118 generic.go:358] "Generic (PLEG): container finished" podID="fdc926a9-b83b-4c7d-9558-98ab053066a1" containerID="25d67b8e7408d024f35388273ea1208862ef3239f6f0eaeb38868e7d7ef1e190" exitCode=0 Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.076882 5118 generic.go:358] "Generic (PLEG): container finished" podID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" containerID="822c2b623193bcd8945c3d0419b8ffeb98edc14917673a753b98bd1f9a9b4937" exitCode=0 Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.087138 5118 generic.go:358] "Generic (PLEG): container finished" podID="70414740-2872-4ebd-b3b5-ded149c0f019" containerID="898620bd70fd3a06031511716724a97b64124bf95677abd2479ecec31949844f" exitCode=0 Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.125372 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j28hm" event={"ID":"4d00dd31-7ee8-4424-946d-c67a1cbe55b7","Type":"ContainerStarted","Data":"cf45c01130ba714a4d8fc082ccd7af7a5100a287ddea50c91dd2da7d2be29cc3"} Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.125574 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qrgm" event={"ID":"fdc926a9-b83b-4c7d-9558-98ab053066a1","Type":"ContainerDied","Data":"25d67b8e7408d024f35388273ea1208862ef3239f6f0eaeb38868e7d7ef1e190"} Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.125627 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w85rg"] Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.125645 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"02df487c-5002-42fc-940c-02d7df55f614","Type":"ContainerStarted","Data":"a0a76bc4f29203ae59733bb5004be290978ddbfb38d58086cb7694c086b255b1"} Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.125914 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8htv" event={"ID":"c7e11da8-7a5b-49b5-a421-678c6c8fc10e","Type":"ContainerDied","Data":"822c2b623193bcd8945c3d0419b8ffeb98edc14917673a753b98bd1f9a9b4937"} Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.125933 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rpxq" event={"ID":"70414740-2872-4ebd-b3b5-ded149c0f019","Type":"ContainerDied","Data":"898620bd70fd3a06031511716724a97b64124bf95677abd2479ecec31949844f"} Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.125972 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rpxq" event={"ID":"70414740-2872-4ebd-b3b5-ded149c0f019","Type":"ContainerStarted","Data":"81cd30d77282e220f1aafdf889c356d3f8bb59a0a553eb344ee7f6809cc7ae25"} Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.126153 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.151706 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-utilities\") pod \"redhat-operators-w85rg\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") " pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.151834 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.152086 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6mz4\" (UniqueName: \"kubernetes.io/projected/8588582f-a24f-452b-8770-a5d9533724c0-kube-api-access-m6mz4\") pod \"redhat-operators-w85rg\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") " pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.152239 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-catalog-content\") pod \"redhat-operators-w85rg\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") " pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.152270 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:23.652254203 +0000 UTC m=+135.945099730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.231746 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=3.2317307 podStartE2EDuration="3.2317307s" podCreationTimestamp="2025-12-08 19:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:23.223643464 +0000 UTC m=+135.516488931" watchObservedRunningTime="2025-12-08 19:31:23.2317307 +0000 UTC m=+135.524576147" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.258279 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.258470 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-catalog-content\") pod \"redhat-operators-w85rg\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") " pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.258533 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-utilities\") pod \"redhat-operators-w85rg\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") " pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.258592 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m6mz4\" (UniqueName: \"kubernetes.io/projected/8588582f-a24f-452b-8770-a5d9533724c0-kube-api-access-m6mz4\") pod \"redhat-operators-w85rg\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") " pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.259228 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:23.7592012 +0000 UTC m=+136.052046657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.260303 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-utilities\") pod \"redhat-operators-w85rg\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") " pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.260345 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-catalog-content\") pod \"redhat-operators-w85rg\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") " pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.285376 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6mz4\" (UniqueName: \"kubernetes.io/projected/8588582f-a24f-452b-8770-a5d9533724c0-kube-api-access-m6mz4\") pod \"redhat-operators-w85rg\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") " pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.360424 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.361000 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:23.860983067 +0000 UTC m=+136.153828524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.394411 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mlt4z"] Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.456996 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.462255 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.462468 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:23.962432116 +0000 UTC m=+136.255277593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.463193 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.463631 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:23.963621288 +0000 UTC m=+136.256466745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.564740 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.564837 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.064818779 +0000 UTC m=+136.357664226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.564965 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.565295 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.065288243 +0000 UTC m=+136.358133700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.628258 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:23 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:23 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:23 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.628586 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.667106 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.667299 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.167266356 +0000 UTC m=+136.460111833 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.667590 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf"
Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.668073 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.168058086 +0000 UTC m=+136.460903543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.768530 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
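
Note: the loop above is kubelet's volume reconciler retrying MountVolume.MountDevice (for the incoming pod image-registry-66587d64c8-k49rf) and UnmountVolume.TearDown (for the departing pod 9e9b5059-1b3e-4067-a63d-2952cbe863af) against a CSI driver that has not registered yet; nestedpendingoperations.go:348 re-queues each attempt with a fixed 500ms durationBeforeRetry, visible in the "No retries permitted until ..." deadlines. A rough way to check that cadence from a saved copy of this journal is sketched below, using only the Python standard library; the filename journal.log is an assumption for illustration, not something named in the log.

    import re
    from datetime import datetime

    # Each nestedpendingoperations.go:348 record carries the next allowed retry
    # time, e.g. "No retries permitted until 2025-12-08 19:31:23.343901209 +0000 UTC".
    PAT = re.compile(r"No retries permitted until (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)")

    retries = []
    with open("journal.log", encoding="utf-8") as f:  # assumed filename
        for line in f:
            if "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" not in line:
                continue
            for m in PAT.finditer(line):
                date, frac = m.group(1).split(".")
                # klog prints nanoseconds; strptime's %f accepts at most microseconds.
                retries.append(datetime.strptime(f"{date}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f"))

    for earlier, later in zip(retries, retries[1:]):
        print(later - earlier)  # spacing between successive retry deadlines

Because the mount and unmount operations interleave on the same volume, adjacent gaps come out shorter than 500ms; per operation the deadlines advance by the 500ms backoff.
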
Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.768796 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.268777415 +0000 UTC m=+136.561622872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.829455 5118 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.870817 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf"
Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.871322 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.371300383 +0000 UTC m=+136.664146040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.948837 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w85rg"]
Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.956525 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.965063 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 19:31:23 crc kubenswrapper[5118]: I1208 19:31:23.972507 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.972888 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.472869555 +0000 UTC m=+136.765715012 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.989096 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:23 crc kubenswrapper[5118]: E1208 19:31:23.989177 5118 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" podUID="574501a5-bb4b-4c42-9046-e00bc9447f56" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.073891 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:24 crc kubenswrapper[5118]: E1208 19:31:24.074568 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.574547759 +0000 UTC m=+136.867393276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.105227 5118 generic.go:358] "Generic (PLEG): container finished" podID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" containerID="eb18f54e3761560f02f7e6f079a03b87cf41cf78566d069df63ec3d0ba4cdfae" exitCode=0 Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.105300 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j28hm" event={"ID":"4d00dd31-7ee8-4424-946d-c67a1cbe55b7","Type":"ContainerDied","Data":"eb18f54e3761560f02f7e6f079a03b87cf41cf78566d069df63ec3d0ba4cdfae"} Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.110450 5118 generic.go:358] "Generic (PLEG): container finished" podID="14b81eee-396d-4e4e-a48c-87183aa677a0" containerID="f8f1dd839ba4fafcbbf531ee7bb3452522db6ae9b64c52f4afc318bacd83c4fa" exitCode=0 Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.110575 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlt4z" event={"ID":"14b81eee-396d-4e4e-a48c-87183aa677a0","Type":"ContainerDied","Data":"f8f1dd839ba4fafcbbf531ee7bb3452522db6ae9b64c52f4afc318bacd83c4fa"} Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.110595 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlt4z" event={"ID":"14b81eee-396d-4e4e-a48c-87183aa677a0","Type":"ContainerStarted","Data":"b1b9f756116f7471aa1f88875d48f24e4842021641f718a0d0cc5a66845596b0"} Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.114439 5118 generic.go:358] "Generic (PLEG): container finished" podID="02df487c-5002-42fc-940c-02d7df55f614" containerID="a0a76bc4f29203ae59733bb5004be290978ddbfb38d58086cb7694c086b255b1" exitCode=0 Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.114645 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"02df487c-5002-42fc-940c-02d7df55f614","Type":"ContainerDied","Data":"a0a76bc4f29203ae59733bb5004be290978ddbfb38d58086cb7694c086b255b1"} Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.116427 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w85rg" event={"ID":"8588582f-a24f-452b-8770-a5d9533724c0","Type":"ContainerStarted","Data":"be41098bb1e015880ecbd6f4331a33ff0d2604319fa574b5a44cda6f8288df95"} Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.135648 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" event={"ID":"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6","Type":"ContainerStarted","Data":"f405d3fa7f8d839819fba112b9991b94a54cbe73d8b59dd6de0a1a7cfed4c799"} Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.176238 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 
19:31:24 crc kubenswrapper[5118]: E1208 19:31:24.176403 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.676376528 +0000 UTC m=+136.969221985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.177063 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:24 crc kubenswrapper[5118]: E1208 19:31:24.177392 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.677382925 +0000 UTC m=+136.970228382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-k49rf" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.237073 5118 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-08T19:31:23.829499668Z","UUID":"ece26906-bd2e-4ee4-aaf6-b1252f211699","Handler":null,"Name":"","Endpoint":""} Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.246768 5118 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.246793 5118 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.277929 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.285760 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.379396 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.389553 5118 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.389590 5118 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.433446 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-k49rf\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.628834 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:24 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 19:31:24 crc kubenswrapper[5118]: [+]process-running ok Dec 08 19:31:24 crc kubenswrapper[5118]: healthz check failed Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.628895 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.636770 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.644067 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.644605 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.646595 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.647109 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.684212 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/186873a7-acc0-4e1b-9013-e906ad994b3b-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"186873a7-acc0-4e1b-9013-e906ad994b3b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.684545 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/186873a7-acc0-4e1b-9013-e906ad994b3b-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"186873a7-acc0-4e1b-9013-e906ad994b3b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.742132 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.750484 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.790262 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/186873a7-acc0-4e1b-9013-e906ad994b3b-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"186873a7-acc0-4e1b-9013-e906ad994b3b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.790349 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/186873a7-acc0-4e1b-9013-e906ad994b3b-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"186873a7-acc0-4e1b-9013-e906ad994b3b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.790419 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/186873a7-acc0-4e1b-9013-e906ad994b3b-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"186873a7-acc0-4e1b-9013-e906ad994b3b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.825050 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/186873a7-acc0-4e1b-9013-e906ad994b3b-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"186873a7-acc0-4e1b-9013-e906ad994b3b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:24 crc kubenswrapper[5118]: I1208 19:31:24.976565 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.195222 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-k49rf"] Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.206244 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" event={"ID":"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6","Type":"ContainerStarted","Data":"f6acbcda4c458abb23471bc3f5cac1b034b5ee1d791b363cd86167ac7e9bc463"} Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.206317 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" event={"ID":"58cb15f0-81cf-46ab-8c99-afa4fd7a67d6","Type":"ContainerStarted","Data":"f2a6dd5ebf2c000717944bb7527be9ee8c1f670d425fe72b19430d1cdb841604"} Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.224290 5118 generic.go:358] "Generic (PLEG): container finished" podID="ab666d86-db2b-4489-a868-8d24159ea775" containerID="8c8f9eeeb059046cd25de131c14b026dab52645d33eb93b95e12da3991b31a32" exitCode=0 Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.225391 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" event={"ID":"ab666d86-db2b-4489-a868-8d24159ea775","Type":"ContainerDied","Data":"8c8f9eeeb059046cd25de131c14b026dab52645d33eb93b95e12da3991b31a32"} Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.241980 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xc9vh" podStartSLOduration=19.241966168 podStartE2EDuration="19.241966168s" podCreationTimestamp="2025-12-08 19:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:25.240985272 +0000 UTC m=+137.533830739" watchObservedRunningTime="2025-12-08 19:31:25.241966168 +0000 UTC m=+137.534811625" Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.263509 5118 generic.go:358] "Generic (PLEG): container finished" podID="8588582f-a24f-452b-8770-a5d9533724c0" containerID="c93fa358b6a11b6c43de050dcef5add1aaa7e623aeb61b1fb5b00c7673d4148c" exitCode=0 Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.263901 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w85rg" event={"ID":"8588582f-a24f-452b-8770-a5d9533724c0","Type":"ContainerDied","Data":"c93fa358b6a11b6c43de050dcef5add1aaa7e623aeb61b1fb5b00c7673d4148c"} Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.277399 5118 ???:1] "http: TLS handshake error from 192.168.126.11:49802: no serving certificate available for the kubelet" Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.435641 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 19:31:25 crc kubenswrapper[5118]: W1208 19:31:25.514322 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod186873a7_acc0_4e1b_9013_e906ad994b3b.slice/crio-3b3770c405e77604496bebef35cd018e3e1c7dd1e8eeb8f383f8796d430738da WatchSource:0}: Error finding container 3b3770c405e77604496bebef35cd018e3e1c7dd1e8eeb8f383f8796d430738da: Status 404 returned error can't find the container with id 3b3770c405e77604496bebef35cd018e3e1c7dd1e8eeb8f383f8796d430738da Dec 08 19:31:25 crc kubenswrapper[5118]: 
Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.689290 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 19:31:25 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld
Dec 08 19:31:25 crc kubenswrapper[5118]: [+]process-running ok
Dec 08 19:31:25 crc kubenswrapper[5118]: healthz check failed
Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.689372 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.733241 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.833437 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02df487c-5002-42fc-940c-02d7df55f614-kube-api-access\") pod \"02df487c-5002-42fc-940c-02d7df55f614\" (UID: \"02df487c-5002-42fc-940c-02d7df55f614\") "
Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.833777 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02df487c-5002-42fc-940c-02d7df55f614-kubelet-dir\") pod \"02df487c-5002-42fc-940c-02d7df55f614\" (UID: \"02df487c-5002-42fc-940c-02d7df55f614\") "
Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.834121 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02df487c-5002-42fc-940c-02d7df55f614-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "02df487c-5002-42fc-940c-02d7df55f614" (UID: "02df487c-5002-42fc-940c-02d7df55f614"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.856902 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02df487c-5002-42fc-940c-02d7df55f614-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "02df487c-5002-42fc-940c-02d7df55f614" (UID: "02df487c-5002-42fc-940c-02d7df55f614"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.935952 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02df487c-5002-42fc-940c-02d7df55f614-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:25 crc kubenswrapper[5118]: I1208 19:31:25.935992 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02df487c-5002-42fc-940c-02d7df55f614-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.114788 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.278903 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"186873a7-acc0-4e1b-9013-e906ad994b3b","Type":"ContainerStarted","Data":"3b3770c405e77604496bebef35cd018e3e1c7dd1e8eeb8f383f8796d430738da"} Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.280653 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" event={"ID":"5a7dc4f4-9762-4968-b509-c2ee68240e9b","Type":"ContainerStarted","Data":"35e71916328b5ddf865ad73e3cbd75ade7d5eabe95aba64d542e2031fb0e8097"} Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.280681 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" event={"ID":"5a7dc4f4-9762-4968-b509-c2ee68240e9b","Type":"ContainerStarted","Data":"0a95173a440d45d9fb9572353a443c43833845d61847f9d3764583475aa7a2e0"} Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.281781 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.283949 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.283980 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"02df487c-5002-42fc-940c-02d7df55f614","Type":"ContainerDied","Data":"acd56b01c07bc83cda9a50e3a4f0443360c016636955a12a431a06dd55f298fb"} Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.284005 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acd56b01c07bc83cda9a50e3a4f0443360c016636955a12a431a06dd55f298fb" Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.609076 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5"
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.631661 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" podStartSLOduration=118.631638176 podStartE2EDuration="1m58.631638176s" podCreationTimestamp="2025-12-08 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:26.318239497 +0000 UTC m=+138.611084964" watchObservedRunningTime="2025-12-08 19:31:26.631638176 +0000 UTC m=+138.924483633"
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.632969 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 19:31:26 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld
Dec 08 19:31:26 crc kubenswrapper[5118]: [+]process-running ok
Dec 08 19:31:26 crc kubenswrapper[5118]: healthz check failed
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.633045 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.649459 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab666d86-db2b-4489-a868-8d24159ea775-secret-volume\") pod \"ab666d86-db2b-4489-a868-8d24159ea775\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") "
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.649550 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab666d86-db2b-4489-a868-8d24159ea775-config-volume\") pod \"ab666d86-db2b-4489-a868-8d24159ea775\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") "
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.649940 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bstcl\" (UniqueName: \"kubernetes.io/projected/ab666d86-db2b-4489-a868-8d24159ea775-kube-api-access-bstcl\") pod \"ab666d86-db2b-4489-a868-8d24159ea775\" (UID: \"ab666d86-db2b-4489-a868-8d24159ea775\") "
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.651215 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab666d86-db2b-4489-a868-8d24159ea775-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab666d86-db2b-4489-a868-8d24159ea775" (UID: "ab666d86-db2b-4489-a868-8d24159ea775"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.676879 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab666d86-db2b-4489-a868-8d24159ea775-kube-api-access-bstcl" (OuterVolumeSpecName: "kube-api-access-bstcl") pod "ab666d86-db2b-4489-a868-8d24159ea775" (UID: "ab666d86-db2b-4489-a868-8d24159ea775"). InnerVolumeSpecName "kube-api-access-bstcl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.694102 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab666d86-db2b-4489-a868-8d24159ea775-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab666d86-db2b-4489-a868-8d24159ea775" (UID: "ab666d86-db2b-4489-a868-8d24159ea775"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.758426 5118 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab666d86-db2b-4489-a868-8d24159ea775-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.758461 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab666d86-db2b-4489-a868-8d24159ea775-config-volume\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.758471 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bstcl\" (UniqueName: \"kubernetes.io/projected/ab666d86-db2b-4489-a868-8d24159ea775-kube-api-access-bstcl\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.814508 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg"
Dec 08 19:31:26 crc kubenswrapper[5118]: I1208 19:31:26.828355 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-8vsfg"
Dec 08 19:31:27 crc kubenswrapper[5118]: I1208 19:31:27.310815 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5" event={"ID":"ab666d86-db2b-4489-a868-8d24159ea775","Type":"ContainerDied","Data":"b1dd1e6529e6d884f5849ddd6bdb2a77cfb55582e9b77db3cb34a1e97de8eb98"}
Dec 08 19:31:27 crc kubenswrapper[5118]: I1208 19:31:27.310880 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1dd1e6529e6d884f5849ddd6bdb2a77cfb55582e9b77db3cb34a1e97de8eb98"
Dec 08 19:31:27 crc kubenswrapper[5118]: I1208 19:31:27.311065 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-s24t5"
Dec 08 19:31:27 crc kubenswrapper[5118]: I1208 19:31:27.327148 5118 generic.go:358] "Generic (PLEG): container finished" podID="186873a7-acc0-4e1b-9013-e906ad994b3b" containerID="ff1abdc62616771c15107a3125241c9d1c6bd648e68c1d464e68516b3a61ac0a" exitCode=0
Dec 08 19:31:27 crc kubenswrapper[5118]: I1208 19:31:27.329013 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"186873a7-acc0-4e1b-9013-e906ad994b3b","Type":"ContainerDied","Data":"ff1abdc62616771c15107a3125241c9d1c6bd648e68c1d464e68516b3a61ac0a"}
Dec 08 19:31:27 crc kubenswrapper[5118]: I1208 19:31:27.633073 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 19:31:27 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld
Dec 08 19:31:27 crc kubenswrapper[5118]: [+]process-running ok
Dec 08 19:31:27 crc kubenswrapper[5118]: healthz check failed
Dec 08 19:31:27 crc kubenswrapper[5118]: I1208 19:31:27.633167 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:31:28 crc kubenswrapper[5118]: I1208 19:31:28.638656 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 19:31:28 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld
Dec 08 19:31:28 crc kubenswrapper[5118]: [+]process-running ok
Dec 08 19:31:28 crc kubenswrapper[5118]: healthz check failed
Dec 08 19:31:28 crc kubenswrapper[5118]: I1208 19:31:28.639398 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:31:29 crc kubenswrapper[5118]: I1208 19:31:29.625468 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 19:31:29 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld
Dec 08 19:31:29 crc kubenswrapper[5118]: [+]process-running ok
Dec 08 19:31:29 crc kubenswrapper[5118]: healthz check failed
Dec 08 19:31:29 crc kubenswrapper[5118]: I1208 19:31:29.626025 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:31:30 crc kubenswrapper[5118]: I1208 19:31:30.627293 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 19:31:30 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld
Dec 08 19:31:30 crc kubenswrapper[5118]: [+]process-running ok
Dec 08 19:31:30 crc kubenswrapper[5118]: healthz check failed
Dec 08 19:31:30 crc kubenswrapper[5118]: I1208 19:31:30.627393 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:31:30 crc kubenswrapper[5118]: I1208 19:31:30.827498 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-qnl9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Dec 08 19:31:30 crc kubenswrapper[5118]: I1208 19:31:30.827578 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-qnl9q" podUID="86f2d26a-630b-4a98-9dc3-c1ec245d7b6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Dec 08 19:31:31 crc kubenswrapper[5118]: I1208 19:31:31.627183 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 19:31:31 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld
Dec 08 19:31:31 crc kubenswrapper[5118]: [+]process-running ok
Dec 08 19:31:31 crc kubenswrapper[5118]: healthz check failed
Dec 08 19:31:31 crc kubenswrapper[5118]: I1208 19:31:31.628999 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:31:32 crc kubenswrapper[5118]: I1208 19:31:32.082713 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-qnl9q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Dec 08 19:31:32 crc kubenswrapper[5118]: I1208 19:31:32.082785 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-qnl9q" podUID="86f2d26a-630b-4a98-9dc3-c1ec245d7b6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Dec 08 19:31:32 crc kubenswrapper[5118]: I1208 19:31:32.170815 5118 patch_prober.go:28] interesting pod/console-64d44f6ddf-hxwm8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Dec 08 19:31:32 crc kubenswrapper[5118]: I1208 19:31:32.170876 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-hxwm8" podUID="db584c29-faf0-48cd-ac87-3af21a6fcbe4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Dec 08 19:31:32 crc kubenswrapper[5118]: I1208 19:31:32.625165 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzpzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 19:31:32 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld
Dec 08 19:31:32 crc kubenswrapper[5118]: [+]process-running ok
Dec 08 19:31:32 crc kubenswrapper[5118]: healthz check failed
Dec 08 19:31:32 crc kubenswrapper[5118]: I1208 19:31:32.625259 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx" podUID="90514180-5ed3-4eb5-b13e-cd3b90998a22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:31:32 crc kubenswrapper[5118]: I1208 19:31:32.688271 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf"
Dec 08 19:31:33 crc kubenswrapper[5118]: I1208 19:31:33.625323 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx"
Dec 08 19:31:33 crc kubenswrapper[5118]: I1208 19:31:33.629069 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-vzpzx"
Dec 08 19:31:33 crc kubenswrapper[5118]: E1208 19:31:33.954824 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 19:31:33 crc kubenswrapper[5118]: E1208 19:31:33.957312 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 19:31:33 crc kubenswrapper[5118]: E1208 19:31:33.958614 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 19:31:33 crc kubenswrapper[5118]: E1208 19:31:33.958660 5118 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" podUID="574501a5-bb4b-4c42-9046-e00bc9447f56" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 08 19:31:35 crc kubenswrapper[5118]: I1208 19:31:35.305824 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 19:31:35 crc kubenswrapper[5118]: I1208 19:31:35.340942 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/186873a7-acc0-4e1b-9013-e906ad994b3b-kube-api-access\") pod \"186873a7-acc0-4e1b-9013-e906ad994b3b\" (UID: \"186873a7-acc0-4e1b-9013-e906ad994b3b\") "
Dec 08 19:31:35 crc kubenswrapper[5118]: I1208 19:31:35.341005 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/186873a7-acc0-4e1b-9013-e906ad994b3b-kubelet-dir\") pod \"186873a7-acc0-4e1b-9013-e906ad994b3b\" (UID: \"186873a7-acc0-4e1b-9013-e906ad994b3b\") "
Dec 08 19:31:35 crc kubenswrapper[5118]: I1208 19:31:35.341334 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/186873a7-acc0-4e1b-9013-e906ad994b3b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "186873a7-acc0-4e1b-9013-e906ad994b3b" (UID: "186873a7-acc0-4e1b-9013-e906ad994b3b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:31:35 crc kubenswrapper[5118]: I1208 19:31:35.348676 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/186873a7-acc0-4e1b-9013-e906ad994b3b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "186873a7-acc0-4e1b-9013-e906ad994b3b" (UID: "186873a7-acc0-4e1b-9013-e906ad994b3b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:31:35 crc kubenswrapper[5118]: I1208 19:31:35.398893 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"186873a7-acc0-4e1b-9013-e906ad994b3b","Type":"ContainerDied","Data":"3b3770c405e77604496bebef35cd018e3e1c7dd1e8eeb8f383f8796d430738da"}
Dec 08 19:31:35 crc kubenswrapper[5118]: I1208 19:31:35.398937 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 19:31:35 crc kubenswrapper[5118]: I1208 19:31:35.398965 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b3770c405e77604496bebef35cd018e3e1c7dd1e8eeb8f383f8796d430738da"
Dec 08 19:31:35 crc kubenswrapper[5118]: I1208 19:31:35.442346 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/186873a7-acc0-4e1b-9013-e906ad994b3b-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:35 crc kubenswrapper[5118]: I1208 19:31:35.442393 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/186873a7-acc0-4e1b-9013-e906ad994b3b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:35 crc kubenswrapper[5118]: I1208 19:31:35.574232 5118 ???:1] "http: TLS handshake error from 192.168.126.11:57650: no serving certificate available for the kubelet"
Dec 08 19:31:40 crc kubenswrapper[5118]: I1208 19:31:40.446483 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs27m" event={"ID":"9801ce4f-e9bf-4c09-a624-81675bbda6fa","Type":"ContainerStarted","Data":"d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5"}
Dec 08 19:31:40 crc kubenswrapper[5118]: I1208 19:31:40.452071 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j28hm" event={"ID":"4d00dd31-7ee8-4424-946d-c67a1cbe55b7","Type":"ContainerStarted","Data":"0aff11a6a74916ee14ce8b03ad0e15db4a2fb90eb28632a644805588e408913b"}
Dec 08 19:31:40 crc kubenswrapper[5118]: I1208 19:31:40.462290 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlt4z" event={"ID":"14b81eee-396d-4e4e-a48c-87183aa677a0","Type":"ContainerStarted","Data":"38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee"}
Dec 08 19:31:40 crc kubenswrapper[5118]: I1208 19:31:40.465815 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qrgm" event={"ID":"fdc926a9-b83b-4c7d-9558-98ab053066a1","Type":"ContainerStarted","Data":"e8facb851eae01012f9298095f2f527cd8b7d23c31dd7b9e8393982ee6fec333"}
Dec 08 19:31:40 crc kubenswrapper[5118]: I1208 19:31:40.478985 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8htv" event={"ID":"c7e11da8-7a5b-49b5-a421-678c6c8fc10e","Type":"ContainerStarted","Data":"3b33ad5327ab74eae550ce25daa36b32cebab8a691de0dc756016300e68842f3"}
Dec 08 19:31:40 crc kubenswrapper[5118]: I1208 19:31:40.489903 5118 generic.go:358] "Generic (PLEG): container finished" podID="70414740-2872-4ebd-b3b5-ded149c0f019" containerID="629a6e05d4deaf42e44711007fa5fdb2583b87e8cb1fcd8ca3e8156ed75df6be" exitCode=0
Dec 08 19:31:40 crc kubenswrapper[5118]: I1208 19:31:40.490404 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rpxq" event={"ID":"70414740-2872-4ebd-b3b5-ded149c0f019","Type":"ContainerDied","Data":"629a6e05d4deaf42e44711007fa5fdb2583b87e8cb1fcd8ca3e8156ed75df6be"}
Dec 08 19:31:40 crc kubenswrapper[5118]: I1208 19:31:40.494680 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w85rg" event={"ID":"8588582f-a24f-452b-8770-a5d9533724c0","Type":"ContainerStarted","Data":"a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f"}
Dec 08 19:31:40 crc kubenswrapper[5118]: I1208 19:31:40.498150 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7px9v" event={"ID":"6d799616-15c0-4e4f-8cbb-5f33d9f607ef","Type":"ContainerStarted","Data":"8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9"}
Dec 08 19:31:40 crc kubenswrapper[5118]: I1208 19:31:40.841372 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-qnl9q"
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.505835 5118 generic.go:358] "Generic (PLEG): container finished" podID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" containerID="d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5" exitCode=0
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.505910 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs27m" event={"ID":"9801ce4f-e9bf-4c09-a624-81675bbda6fa","Type":"ContainerDied","Data":"d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5"}
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.509537 5118 generic.go:358] "Generic (PLEG): container finished" podID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" containerID="0aff11a6a74916ee14ce8b03ad0e15db4a2fb90eb28632a644805588e408913b" exitCode=0
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.509596 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j28hm" event={"ID":"4d00dd31-7ee8-4424-946d-c67a1cbe55b7","Type":"ContainerDied","Data":"0aff11a6a74916ee14ce8b03ad0e15db4a2fb90eb28632a644805588e408913b"}
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.512382 5118 generic.go:358] "Generic (PLEG): container finished" podID="14b81eee-396d-4e4e-a48c-87183aa677a0" containerID="38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee" exitCode=0
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.512554 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlt4z" event={"ID":"14b81eee-396d-4e4e-a48c-87183aa677a0","Type":"ContainerDied","Data":"38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee"}
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.514993 5118 generic.go:358] "Generic (PLEG): container finished" podID="fdc926a9-b83b-4c7d-9558-98ab053066a1" containerID="e8facb851eae01012f9298095f2f527cd8b7d23c31dd7b9e8393982ee6fec333" exitCode=0
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.515089 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qrgm" event={"ID":"fdc926a9-b83b-4c7d-9558-98ab053066a1","Type":"ContainerDied","Data":"e8facb851eae01012f9298095f2f527cd8b7d23c31dd7b9e8393982ee6fec333"}
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.517152 5118 generic.go:358] "Generic (PLEG): container finished" podID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" containerID="3b33ad5327ab74eae550ce25daa36b32cebab8a691de0dc756016300e68842f3" exitCode=0
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.517398 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8htv" event={"ID":"c7e11da8-7a5b-49b5-a421-678c6c8fc10e","Type":"ContainerDied","Data":"3b33ad5327ab74eae550ce25daa36b32cebab8a691de0dc756016300e68842f3"}
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.520737 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w85rg" event={"ID":"8588582f-a24f-452b-8770-a5d9533724c0","Type":"ContainerDied","Data":"a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f"}
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.521573 5118 generic.go:358] "Generic (PLEG): container finished" podID="8588582f-a24f-452b-8770-a5d9533724c0" containerID="a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f" exitCode=0
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.526224 5118 generic.go:358] "Generic (PLEG): container finished" podID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" containerID="8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9" exitCode=0
Dec 08 19:31:41 crc kubenswrapper[5118]: I1208 19:31:41.526751 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7px9v" event={"ID":"6d799616-15c0-4e4f-8cbb-5f33d9f607ef","Type":"ContainerDied","Data":"8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9"}
Dec 08 19:31:42 crc kubenswrapper[5118]: I1208 19:31:42.177404 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-hxwm8"
Dec 08 19:31:42 crc kubenswrapper[5118]: I1208 19:31:42.535984 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlt4z" event={"ID":"14b81eee-396d-4e4e-a48c-87183aa677a0","Type":"ContainerStarted","Data":"348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4"}
Dec 08 19:31:42 crc kubenswrapper[5118]: I1208 19:31:42.538132 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8htv" event={"ID":"c7e11da8-7a5b-49b5-a421-678c6c8fc10e","Type":"ContainerStarted","Data":"fd7e7aaab4de65d9c1186a1d6c878323a4a21b1bc21619e3b0feef24709e0e7e"}
Dec 08 19:31:42 crc kubenswrapper[5118]: I1208 19:31:42.540337 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rpxq" event={"ID":"70414740-2872-4ebd-b3b5-ded149c0f019","Type":"ContainerStarted","Data":"956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e"}
Dec 08 19:31:42 crc kubenswrapper[5118]: I1208 19:31:42.948954 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-hxwm8"
Dec 08 19:31:42 crc kubenswrapper[5118]: I1208 19:31:42.982404 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8rpxq" podStartSLOduration=5.236286002 podStartE2EDuration="21.982385309s" podCreationTimestamp="2025-12-08 19:31:21 +0000 UTC" firstStartedPulling="2025-12-08 19:31:23.128578757 +0000 UTC m=+135.421424204" lastFinishedPulling="2025-12-08 19:31:39.874678044 +0000 UTC m=+152.167523511" observedRunningTime="2025-12-08 19:31:42.980957291 +0000 UTC m=+155.273802758" watchObservedRunningTime="2025-12-08 19:31:42.982385309 +0000 UTC m=+155.275230756"
Dec 08 19:31:43 crc kubenswrapper[5118]: I1208 19:31:43.559440 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs27m" event={"ID":"9801ce4f-e9bf-4c09-a624-81675bbda6fa","Type":"ContainerStarted","Data":"9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109"}
Dec 08 19:31:43 crc kubenswrapper[5118]: I1208 19:31:43.562175 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j28hm" event={"ID":"4d00dd31-7ee8-4424-946d-c67a1cbe55b7","Type":"ContainerStarted","Data":"7cab224a10eb43563f7b55f6a0b160f9eceb79e7ef0c12fd96b0ff700aed82e2"}
Dec 08 19:31:43 crc kubenswrapper[5118]: I1208 19:31:43.564621 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qrgm" event={"ID":"fdc926a9-b83b-4c7d-9558-98ab053066a1","Type":"ContainerStarted","Data":"f15b502f4976cf81b10e925de2accf919c18179d5923dd8b94d3688a4bcff2cf"}
Dec 08 19:31:43 crc kubenswrapper[5118]: I1208 19:31:43.754407 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t8htv" podStartSLOduration=7.905839159 podStartE2EDuration="24.754389403s" podCreationTimestamp="2025-12-08 19:31:19 +0000 UTC" firstStartedPulling="2025-12-08 19:31:23.133310673 +0000 UTC m=+135.426156130" lastFinishedPulling="2025-12-08 19:31:39.981860917 +0000 UTC m=+152.274706374" observedRunningTime="2025-12-08 19:31:43.749903122 +0000 UTC m=+156.042748599" watchObservedRunningTime="2025-12-08 19:31:43.754389403 +0000 UTC m=+156.047234870"
Dec 08 19:31:43 crc kubenswrapper[5118]: I1208 19:31:43.780184 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mlt4z" podStartSLOduration=5.900495202 podStartE2EDuration="21.780149185s" podCreationTimestamp="2025-12-08 19:31:22 +0000 UTC" firstStartedPulling="2025-12-08 19:31:24.111247837 +0000 UTC m=+136.404093294" lastFinishedPulling="2025-12-08 19:31:39.99090182 +0000 UTC m=+152.283747277" observedRunningTime="2025-12-08 19:31:43.77919601 +0000 UTC m=+156.072041467" watchObservedRunningTime="2025-12-08 19:31:43.780149185 +0000 UTC m=+156.072994642"
Dec 08 19:31:43 crc kubenswrapper[5118]: I1208 19:31:43.827076 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cs27m" podStartSLOduration=6.971460078 podStartE2EDuration="24.827047077s" podCreationTimestamp="2025-12-08 19:31:19 +0000 UTC" firstStartedPulling="2025-12-08 19:31:22.094078432 +0000 UTC m=+134.386923889" lastFinishedPulling="2025-12-08 19:31:39.949665431 +0000 UTC m=+152.242510888" observedRunningTime="2025-12-08 19:31:43.800736539 +0000 UTC m=+156.093582006" watchObservedRunningTime="2025-12-08 19:31:43.827047077 +0000 UTC m=+156.119892554"
Dec 08 19:31:43 crc kubenswrapper[5118]: I1208 19:31:43.827996 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j28hm" podStartSLOduration=6.973049084 podStartE2EDuration="22.827987323s" podCreationTimestamp="2025-12-08 19:31:21 +0000 UTC" firstStartedPulling="2025-12-08 19:31:24.105944104 +0000 UTC m=+136.398789561" lastFinishedPulling="2025-12-08 19:31:39.960882343 +0000 UTC m=+152.253727800" observedRunningTime="2025-12-08 19:31:43.823426209 +0000 UTC m=+156.116271666" watchObservedRunningTime="2025-12-08 19:31:43.827987323 +0000 UTC m=+156.120832790"
Dec 08 19:31:43 crc kubenswrapper[5118]: E1208 19:31:43.954182 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 19:31:43 crc kubenswrapper[5118]: E1208 19:31:43.956279 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 19:31:43 crc kubenswrapper[5118]: E1208 19:31:43.957805 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 19:31:43 crc kubenswrapper[5118]: E1208 19:31:43.957859 5118 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" podUID="574501a5-bb4b-4c42-9046-e00bc9447f56" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 08 19:31:44 crc kubenswrapper[5118]: I1208 19:31:44.572538 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w85rg" event={"ID":"8588582f-a24f-452b-8770-a5d9533724c0","Type":"ContainerStarted","Data":"e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163"}
Dec 08 19:31:44 crc kubenswrapper[5118]: I1208 19:31:44.575673 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7px9v" event={"ID":"6d799616-15c0-4e4f-8cbb-5f33d9f607ef","Type":"ContainerStarted","Data":"ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a"}
Dec 08 19:31:44 crc kubenswrapper[5118]: I1208 19:31:44.595597 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w85rg" podStartSLOduration=6.819747933 podStartE2EDuration="21.595582038s" podCreationTimestamp="2025-12-08 19:31:23 +0000 UTC" firstStartedPulling="2025-12-08 19:31:25.265346458 +0000 UTC m=+137.558191915" lastFinishedPulling="2025-12-08 19:31:40.041180542 +0000 UTC m=+152.334026020" observedRunningTime="2025-12-08 19:31:44.592821094 +0000 UTC m=+156.885666551" watchObservedRunningTime="2025-12-08 19:31:44.595582038 +0000 UTC m=+156.888427495"
Dec 08 19:31:45 crc kubenswrapper[5118]: I1208 19:31:45.602404 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5qrgm" podStartSLOduration=8.686891193 podStartE2EDuration="25.602365807s" podCreationTimestamp="2025-12-08 19:31:20 +0000 UTC" firstStartedPulling="2025-12-08 19:31:23.053853366 +0000 UTC m=+135.346699283" lastFinishedPulling="2025-12-08 19:31:39.96932844 +0000 UTC m=+152.262173897" observedRunningTime="2025-12-08 19:31:44.617010834 +0000 UTC m=+156.909856291" watchObservedRunningTime="2025-12-08 19:31:45.602365807 +0000 UTC m=+157.895211264"
Dec 08 19:31:45 crc kubenswrapper[5118]: I1208 19:31:45.606401 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7px9v" podStartSLOduration=8.742108682 podStartE2EDuration="26.606380705s" podCreationTimestamp="2025-12-08 19:31:19 +0000 UTC" firstStartedPulling="2025-12-08 19:31:22.09625636 +0000 UTC m=+134.389101817" lastFinishedPulling="2025-12-08 19:31:39.960528383 +0000 UTC m=+152.253373840" observedRunningTime="2025-12-08 19:31:45.601703488 +0000 UTC m=+157.894548955" watchObservedRunningTime="2025-12-08 19:31:45.606380705 +0000 UTC m=+157.899226162"
Dec 08 19:31:48 crc kubenswrapper[5118]: I1208 19:31:48.343251 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-k49rf"
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.604001 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rxwj8_574501a5-bb4b-4c42-9046-e00bc9447f56/kube-multus-additional-cni-plugins/0.log"
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.604046 5118 generic.go:358] "Generic (PLEG): container finished" podID="574501a5-bb4b-4c42-9046-e00bc9447f56" containerID="c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388" exitCode=137
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.604259 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" event={"ID":"574501a5-bb4b-4c42-9046-e00bc9447f56","Type":"ContainerDied","Data":"c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388"}
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.867101 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7px9v"
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.867374 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7px9v"
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.873171 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rxwj8_574501a5-bb4b-4c42-9046-e00bc9447f56/kube-multus-additional-cni-plugins/0.log"
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.873236 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8"
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.920018 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/574501a5-bb4b-4c42-9046-e00bc9447f56-cni-sysctl-allowlist\") pod \"574501a5-bb4b-4c42-9046-e00bc9447f56\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") "
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.920117 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/574501a5-bb4b-4c42-9046-e00bc9447f56-tuning-conf-dir\") pod \"574501a5-bb4b-4c42-9046-e00bc9447f56\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") "
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.920174 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/574501a5-bb4b-4c42-9046-e00bc9447f56-ready\") pod \"574501a5-bb4b-4c42-9046-e00bc9447f56\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") "
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.920279 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/574501a5-bb4b-4c42-9046-e00bc9447f56-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "574501a5-bb4b-4c42-9046-e00bc9447f56" (UID: "574501a5-bb4b-4c42-9046-e00bc9447f56"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.920362 5118 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/574501a5-bb4b-4c42-9046-e00bc9447f56-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.921060 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/574501a5-bb4b-4c42-9046-e00bc9447f56-ready" (OuterVolumeSpecName: "ready") pod "574501a5-bb4b-4c42-9046-e00bc9447f56" (UID: "574501a5-bb4b-4c42-9046-e00bc9447f56"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.921544 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/574501a5-bb4b-4c42-9046-e00bc9447f56-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "574501a5-bb4b-4c42-9046-e00bc9447f56" (UID: "574501a5-bb4b-4c42-9046-e00bc9447f56"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:31:49 crc kubenswrapper[5118]: I1208 19:31:49.929472 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7px9v"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.021169 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlqfv\" (UniqueName: \"kubernetes.io/projected/574501a5-bb4b-4c42-9046-e00bc9447f56-kube-api-access-jlqfv\") pod \"574501a5-bb4b-4c42-9046-e00bc9447f56\" (UID: \"574501a5-bb4b-4c42-9046-e00bc9447f56\") "
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.021544 5118 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/574501a5-bb4b-4c42-9046-e00bc9447f56-ready\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.021565 5118 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/574501a5-bb4b-4c42-9046-e00bc9447f56-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.028379 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574501a5-bb4b-4c42-9046-e00bc9447f56-kube-api-access-jlqfv" (OuterVolumeSpecName: "kube-api-access-jlqfv") pod "574501a5-bb4b-4c42-9046-e00bc9447f56" (UID: "574501a5-bb4b-4c42-9046-e00bc9447f56"). InnerVolumeSpecName "kube-api-access-jlqfv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.054751 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-cs27m"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.054817 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cs27m"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.087861 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cs27m"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.123059 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jlqfv\" (UniqueName: \"kubernetes.io/projected/574501a5-bb4b-4c42-9046-e00bc9447f56-kube-api-access-jlqfv\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.340501 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t8htv"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.340564 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-t8htv"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.395399 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t8htv"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.413140 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5qrgm"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.413221 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-5qrgm"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.471563 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5qrgm"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.612616 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rxwj8_574501a5-bb4b-4c42-9046-e00bc9447f56/kube-multus-additional-cni-plugins/0.log"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.612799 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8" event={"ID":"574501a5-bb4b-4c42-9046-e00bc9447f56","Type":"ContainerDied","Data":"1ebf6179ada0786e0cc843832b9e3e7df51baffd0425df229248586b60c2c903"}
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.612833 5118 scope.go:117] "RemoveContainer" containerID="c033c7b04383d901acf213784b3fe0ae6796ec0b77af27e121920822a889e388"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.613508 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rxwj8"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.641016 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rxwj8"]
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.651189 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rxwj8"]
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.668021 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t8htv"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.674757 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7px9v"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.683664 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cs27m"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.686365 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5qrgm"
Dec 08 19:31:50 crc kubenswrapper[5118]: I1208 19:31:50.832763 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pkb44"
Dec 08 19:31:51 crc kubenswrapper[5118]: I1208 19:31:51.282752 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8htv"]
Dec 08 19:31:51 crc kubenswrapper[5118]: I1208 19:31:51.404445 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:31:51 crc kubenswrapper[5118]: I1208 19:31:51.946420 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8rpxq"
Dec 08 19:31:51 crc kubenswrapper[5118]: I1208 19:31:51.946484 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-8rpxq"
Dec 08 19:31:51 crc kubenswrapper[5118]: I1208 19:31:51.993030 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8rpxq"
Dec 08 19:31:52 crc kubenswrapper[5118]: I1208 19:31:52.102980 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574501a5-bb4b-4c42-9046-e00bc9447f56" path="/var/lib/kubelet/pods/574501a5-bb4b-4c42-9046-e00bc9447f56/volumes"
Dec 08 19:31:52 crc kubenswrapper[5118]: I1208 19:31:52.281030 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5qrgm"]
Dec 08 19:31:52 crc kubenswrapper[5118]: I1208 19:31:52.473925 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-j28hm"
Dec 08 19:31:52 crc kubenswrapper[5118]: I1208 19:31:52.473974 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j28hm"
Dec 08 19:31:52 crc kubenswrapper[5118]: I1208 19:31:52.515145 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j28hm"
Dec 08 19:31:52 crc kubenswrapper[5118]: I1208 19:31:52.626509 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5qrgm" podUID="fdc926a9-b83b-4c7d-9558-98ab053066a1" containerName="registry-server" containerID="cri-o://f15b502f4976cf81b10e925de2accf919c18179d5923dd8b94d3688a4bcff2cf" gracePeriod=2
Dec 08 19:31:52 crc kubenswrapper[5118]: I1208 19:31:52.627049 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t8htv" podUID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" containerName="registry-server" containerID="cri-o://fd7e7aaab4de65d9c1186a1d6c878323a4a21b1bc21619e3b0feef24709e0e7e" gracePeriod=2
Dec 08 19:31:52 crc kubenswrapper[5118]: I1208 19:31:52.663990 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8rpxq"
Dec 08 19:31:52 crc kubenswrapper[5118]: I1208 19:31:52.666460 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j28hm"
Dec 08 19:31:53 crc kubenswrapper[5118]: I1208 19:31:53.053768 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-mlt4z"
Dec 08 19:31:53 crc kubenswrapper[5118]: I1208 19:31:53.054421 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mlt4z"
Dec 08 19:31:53 crc kubenswrapper[5118]: I1208 19:31:53.087905 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mlt4z"
Dec 08 19:31:53 crc kubenswrapper[5118]: I1208 19:31:53.458000 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w85rg"
Dec 08 19:31:53 crc kubenswrapper[5118]: I1208 19:31:53.458075 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-w85rg"
Dec 08 19:31:53 crc kubenswrapper[5118]: I1208 19:31:53.498406 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w85rg"
Dec 08 19:31:53 crc kubenswrapper[5118]: I1208 19:31:53.671898 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w85rg"
Dec 08 19:31:53 crc kubenswrapper[5118]: I1208 19:31:53.674402 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mlt4z"
Dec 08 19:31:54 crc kubenswrapper[5118]: I1208 19:31:54.680842 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j28hm"]
Dec 08 19:31:54 crc kubenswrapper[5118]: I1208 19:31:54.681127 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j28hm" podUID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" containerName="registry-server" containerID="cri-o://7cab224a10eb43563f7b55f6a0b160f9eceb79e7ef0c12fd96b0ff700aed82e2" gracePeriod=2
Dec 08 19:31:55 crc kubenswrapper[5118]: I1208 19:31:55.654131 5118 generic.go:358] "Generic (PLEG): container finished" podID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" containerID="7cab224a10eb43563f7b55f6a0b160f9eceb79e7ef0c12fd96b0ff700aed82e2" exitCode=0
Dec 08 19:31:55 crc kubenswrapper[5118]: I1208 19:31:55.654781 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j28hm" event={"ID":"4d00dd31-7ee8-4424-946d-c67a1cbe55b7","Type":"ContainerDied","Data":"7cab224a10eb43563f7b55f6a0b160f9eceb79e7ef0c12fd96b0ff700aed82e2"}
Dec 08 19:31:55 crc kubenswrapper[5118]: I1208 19:31:55.657397 5118 generic.go:358] "Generic (PLEG): container finished" podID="fdc926a9-b83b-4c7d-9558-98ab053066a1" containerID="f15b502f4976cf81b10e925de2accf919c18179d5923dd8b94d3688a4bcff2cf" exitCode=0
Dec 08 19:31:55 crc kubenswrapper[5118]: I1208 19:31:55.657564 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qrgm" event={"ID":"fdc926a9-b83b-4c7d-9558-98ab053066a1","Type":"ContainerDied","Data":"f15b502f4976cf81b10e925de2accf919c18179d5923dd8b94d3688a4bcff2cf"}
Dec 08 19:31:55 crc kubenswrapper[5118]: I1208 19:31:55.660613 5118 generic.go:358] "Generic (PLEG): container finished" podID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" containerID="fd7e7aaab4de65d9c1186a1d6c878323a4a21b1bc21619e3b0feef24709e0e7e" exitCode=0
Dec 08 19:31:55 crc kubenswrapper[5118]: I1208 19:31:55.660703 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8htv" event={"ID":"c7e11da8-7a5b-49b5-a421-678c6c8fc10e","Type":"ContainerDied","Data":"fd7e7aaab4de65d9c1186a1d6c878323a4a21b1bc21619e3b0feef24709e0e7e"}
Dec 08 19:31:55 crc kubenswrapper[5118]: I1208 19:31:55.967498 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8htv"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.009988 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-utilities\") pod \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") "
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.010130 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln8cm\" (UniqueName: \"kubernetes.io/projected/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-kube-api-access-ln8cm\") pod \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") "
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.010159 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-catalog-content\") pod \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\" (UID: \"c7e11da8-7a5b-49b5-a421-678c6c8fc10e\") "
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.010958 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-utilities" (OuterVolumeSpecName: "utilities") pod "c7e11da8-7a5b-49b5-a421-678c6c8fc10e" (UID: "c7e11da8-7a5b-49b5-a421-678c6c8fc10e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.036024 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-kube-api-access-ln8cm" (OuterVolumeSpecName: "kube-api-access-ln8cm") pod "c7e11da8-7a5b-49b5-a421-678c6c8fc10e" (UID: "c7e11da8-7a5b-49b5-a421-678c6c8fc10e"). InnerVolumeSpecName "kube-api-access-ln8cm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.042032 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7e11da8-7a5b-49b5-a421-678c6c8fc10e" (UID: "c7e11da8-7a5b-49b5-a421-678c6c8fc10e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.078013 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qrgm"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.079023 5118 ???:1] "http: TLS handshake error from 192.168.126.11:47896: no serving certificate available for the kubelet"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.111039 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-catalog-content\") pod \"fdc926a9-b83b-4c7d-9558-98ab053066a1\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") "
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.111116 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j47x\" (UniqueName: \"kubernetes.io/projected/fdc926a9-b83b-4c7d-9558-98ab053066a1-kube-api-access-5j47x\") pod \"fdc926a9-b83b-4c7d-9558-98ab053066a1\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") "
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.111207 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-utilities\") pod \"fdc926a9-b83b-4c7d-9558-98ab053066a1\" (UID: \"fdc926a9-b83b-4c7d-9558-98ab053066a1\") "
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.111467 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ln8cm\" (UniqueName: \"kubernetes.io/projected/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-kube-api-access-ln8cm\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.111485 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.111498 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e11da8-7a5b-49b5-a421-678c6c8fc10e-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.112438 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-utilities" (OuterVolumeSpecName: "utilities") pod "fdc926a9-b83b-4c7d-9558-98ab053066a1" (UID: "fdc926a9-b83b-4c7d-9558-98ab053066a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.116203 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc926a9-b83b-4c7d-9558-98ab053066a1-kube-api-access-5j47x" (OuterVolumeSpecName: "kube-api-access-5j47x") pod "fdc926a9-b83b-4c7d-9558-98ab053066a1" (UID: "fdc926a9-b83b-4c7d-9558-98ab053066a1"). InnerVolumeSpecName "kube-api-access-5j47x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.162549 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdc926a9-b83b-4c7d-9558-98ab053066a1" (UID: "fdc926a9-b83b-4c7d-9558-98ab053066a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.200385 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j28hm"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.212966 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.212991 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc926a9-b83b-4c7d-9558-98ab053066a1-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.213000 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5j47x\" (UniqueName: \"kubernetes.io/projected/fdc926a9-b83b-4c7d-9558-98ab053066a1-kube-api-access-5j47x\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.313631 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df6df\" (UniqueName: \"kubernetes.io/projected/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-kube-api-access-df6df\") pod \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") "
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.313773 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-catalog-content\") pod \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") "
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.313858 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-utilities\") pod \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\" (UID: \"4d00dd31-7ee8-4424-946d-c67a1cbe55b7\") "
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.315150 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-utilities" (OuterVolumeSpecName: "utilities") pod "4d00dd31-7ee8-4424-946d-c67a1cbe55b7" (UID: "4d00dd31-7ee8-4424-946d-c67a1cbe55b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.318448 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-kube-api-access-df6df" (OuterVolumeSpecName: "kube-api-access-df6df") pod "4d00dd31-7ee8-4424-946d-c67a1cbe55b7" (UID: "4d00dd31-7ee8-4424-946d-c67a1cbe55b7"). InnerVolumeSpecName "kube-api-access-df6df". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.326664 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d00dd31-7ee8-4424-946d-c67a1cbe55b7" (UID: "4d00dd31-7ee8-4424-946d-c67a1cbe55b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.416268 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.416335 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.416346 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-df6df\" (UniqueName: \"kubernetes.io/projected/4d00dd31-7ee8-4424-946d-c67a1cbe55b7-kube-api-access-df6df\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.669417 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8htv"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.669410 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8htv" event={"ID":"c7e11da8-7a5b-49b5-a421-678c6c8fc10e","Type":"ContainerDied","Data":"973dfe9827d648c3ab46b89f6c850d965307522b1d44727e1a75689499bda12c"}
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.669625 5118 scope.go:117] "RemoveContainer" containerID="fd7e7aaab4de65d9c1186a1d6c878323a4a21b1bc21619e3b0feef24709e0e7e"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.674070 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j28hm"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.674061 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j28hm" event={"ID":"4d00dd31-7ee8-4424-946d-c67a1cbe55b7","Type":"ContainerDied","Data":"cf45c01130ba714a4d8fc082ccd7af7a5100a287ddea50c91dd2da7d2be29cc3"}
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.679619 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qrgm" event={"ID":"fdc926a9-b83b-4c7d-9558-98ab053066a1","Type":"ContainerDied","Data":"0da51729f669d267f79c5f2d60d3a62b48cb3decf8bd5fa07706b364dff02185"}
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.679789 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qrgm"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.692453 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8htv"]
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.693465 5118 scope.go:117] "RemoveContainer" containerID="3b33ad5327ab74eae550ce25daa36b32cebab8a691de0dc756016300e68842f3"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.697170 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t8htv"]
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.721753 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j28hm"]
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.721817 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j28hm"]
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.725929 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5qrgm"]
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.729003 5118 scope.go:117] "RemoveContainer" containerID="822c2b623193bcd8945c3d0419b8ffeb98edc14917673a753b98bd1f9a9b4937"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.729596 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5qrgm"]
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.752199 5118 scope.go:117] "RemoveContainer" containerID="7cab224a10eb43563f7b55f6a0b160f9eceb79e7ef0c12fd96b0ff700aed82e2"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.768790 5118 scope.go:117] "RemoveContainer" containerID="0aff11a6a74916ee14ce8b03ad0e15db4a2fb90eb28632a644805588e408913b"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.784329 5118 scope.go:117] "RemoveContainer" containerID="eb18f54e3761560f02f7e6f079a03b87cf41cf78566d069df63ec3d0ba4cdfae"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.797819 5118 scope.go:117] "RemoveContainer" containerID="f15b502f4976cf81b10e925de2accf919c18179d5923dd8b94d3688a4bcff2cf"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.812741 5118 scope.go:117] "RemoveContainer" containerID="e8facb851eae01012f9298095f2f527cd8b7d23c31dd7b9e8393982ee6fec333"
Dec 08 19:31:56 crc kubenswrapper[5118]: I1208 19:31:56.828236 5118 scope.go:117] "RemoveContainer" containerID="25d67b8e7408d024f35388273ea1208862ef3239f6f0eaeb38868e7d7ef1e190"
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.079410 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w85rg"]
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.079656 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w85rg" podUID="8588582f-a24f-452b-8770-a5d9533724c0" containerName="registry-server" containerID="cri-o://e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163" gracePeriod=2
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.567343 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w85rg"
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.634368 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-catalog-content\") pod \"8588582f-a24f-452b-8770-a5d9533724c0\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") "
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.634486 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6mz4\" (UniqueName: \"kubernetes.io/projected/8588582f-a24f-452b-8770-a5d9533724c0-kube-api-access-m6mz4\") pod \"8588582f-a24f-452b-8770-a5d9533724c0\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") "
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.634517 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-utilities\") pod \"8588582f-a24f-452b-8770-a5d9533724c0\" (UID: \"8588582f-a24f-452b-8770-a5d9533724c0\") "
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.635975 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-utilities" (OuterVolumeSpecName: "utilities") pod "8588582f-a24f-452b-8770-a5d9533724c0" (UID: "8588582f-a24f-452b-8770-a5d9533724c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.643242 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8588582f-a24f-452b-8770-a5d9533724c0-kube-api-access-m6mz4" (OuterVolumeSpecName: "kube-api-access-m6mz4") pod "8588582f-a24f-452b-8770-a5d9533724c0" (UID: "8588582f-a24f-452b-8770-a5d9533724c0"). InnerVolumeSpecName "kube-api-access-m6mz4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.691101 5118 generic.go:358] "Generic (PLEG): container finished" podID="8588582f-a24f-452b-8770-a5d9533724c0" containerID="e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163" exitCode=0
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.691270 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w85rg" event={"ID":"8588582f-a24f-452b-8770-a5d9533724c0","Type":"ContainerDied","Data":"e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163"}
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.691312 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w85rg" event={"ID":"8588582f-a24f-452b-8770-a5d9533724c0","Type":"ContainerDied","Data":"be41098bb1e015880ecbd6f4331a33ff0d2604319fa574b5a44cda6f8288df95"}
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.691342 5118 scope.go:117] "RemoveContainer" containerID="e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163"
Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.691403 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w85rg" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.707769 5118 scope.go:117] "RemoveContainer" containerID="a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.730409 5118 scope.go:117] "RemoveContainer" containerID="c93fa358b6a11b6c43de050dcef5add1aaa7e623aeb61b1fb5b00c7673d4148c" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.735078 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8588582f-a24f-452b-8770-a5d9533724c0" (UID: "8588582f-a24f-452b-8770-a5d9533724c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.735926 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m6mz4\" (UniqueName: \"kubernetes.io/projected/8588582f-a24f-452b-8770-a5d9533724c0-kube-api-access-m6mz4\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.735970 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.735990 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8588582f-a24f-452b-8770-a5d9533724c0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.748129 5118 scope.go:117] "RemoveContainer" containerID="e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163" Dec 08 19:31:57 crc kubenswrapper[5118]: E1208 19:31:57.748771 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163\": container with ID starting with e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163 not found: ID does not exist" containerID="e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.748809 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163"} err="failed to get container status \"e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163\": rpc error: code = NotFound desc = could not find container \"e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163\": container with ID starting with e0a140a8db590d3c4321b58477b77f65382aa694aa670f3b99b2ff1f32bfc163 not found: ID does not exist" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.748847 5118 scope.go:117] "RemoveContainer" containerID="a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f" Dec 08 19:31:57 crc kubenswrapper[5118]: E1208 19:31:57.749366 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f\": container with ID starting with a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f not found: ID does not exist" 
containerID="a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.749481 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f"} err="failed to get container status \"a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f\": rpc error: code = NotFound desc = could not find container \"a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f\": container with ID starting with a0e5db72a2533258dfd53ed346b0d8da8d71a49857a433a4d5259699e8554e9f not found: ID does not exist" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.749579 5118 scope.go:117] "RemoveContainer" containerID="c93fa358b6a11b6c43de050dcef5add1aaa7e623aeb61b1fb5b00c7673d4148c" Dec 08 19:31:57 crc kubenswrapper[5118]: E1208 19:31:57.750477 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c93fa358b6a11b6c43de050dcef5add1aaa7e623aeb61b1fb5b00c7673d4148c\": container with ID starting with c93fa358b6a11b6c43de050dcef5add1aaa7e623aeb61b1fb5b00c7673d4148c not found: ID does not exist" containerID="c93fa358b6a11b6c43de050dcef5add1aaa7e623aeb61b1fb5b00c7673d4148c" Dec 08 19:31:57 crc kubenswrapper[5118]: I1208 19:31:57.750548 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c93fa358b6a11b6c43de050dcef5add1aaa7e623aeb61b1fb5b00c7673d4148c"} err="failed to get container status \"c93fa358b6a11b6c43de050dcef5add1aaa7e623aeb61b1fb5b00c7673d4148c\": rpc error: code = NotFound desc = could not find container \"c93fa358b6a11b6c43de050dcef5add1aaa7e623aeb61b1fb5b00c7673d4148c\": container with ID starting with c93fa358b6a11b6c43de050dcef5add1aaa7e623aeb61b1fb5b00c7673d4148c not found: ID does not exist" Dec 08 19:31:58 crc kubenswrapper[5118]: I1208 19:31:58.025861 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w85rg"] Dec 08 19:31:58 crc kubenswrapper[5118]: I1208 19:31:58.030199 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w85rg"] Dec 08 19:31:58 crc kubenswrapper[5118]: I1208 19:31:58.103484 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" path="/var/lib/kubelet/pods/4d00dd31-7ee8-4424-946d-c67a1cbe55b7/volumes" Dec 08 19:31:58 crc kubenswrapper[5118]: I1208 19:31:58.104253 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8588582f-a24f-452b-8770-a5d9533724c0" path="/var/lib/kubelet/pods/8588582f-a24f-452b-8770-a5d9533724c0/volumes" Dec 08 19:31:58 crc kubenswrapper[5118]: I1208 19:31:58.104997 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" path="/var/lib/kubelet/pods/c7e11da8-7a5b-49b5-a421-678c6c8fc10e/volumes" Dec 08 19:31:58 crc kubenswrapper[5118]: I1208 19:31:58.106394 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc926a9-b83b-4c7d-9558-98ab053066a1" path="/var/lib/kubelet/pods/fdc926a9-b83b-4c7d-9558-98ab053066a1/volumes" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.633034 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634744 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" containerName="extract-content" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634763 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" containerName="extract-content" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634775 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="574501a5-bb4b-4c42-9046-e00bc9447f56" containerName="kube-multus-additional-cni-plugins" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634783 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="574501a5-bb4b-4c42-9046-e00bc9447f56" containerName="kube-multus-additional-cni-plugins" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634792 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" containerName="extract-content" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634799 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" containerName="extract-content" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634807 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" containerName="registry-server" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634813 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" containerName="registry-server" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634825 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab666d86-db2b-4489-a868-8d24159ea775" containerName="collect-profiles" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634831 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab666d86-db2b-4489-a868-8d24159ea775" containerName="collect-profiles" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634840 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" containerName="extract-utilities" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634847 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" containerName="extract-utilities" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634855 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="186873a7-acc0-4e1b-9013-e906ad994b3b" containerName="pruner" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634862 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="186873a7-acc0-4e1b-9013-e906ad994b3b" containerName="pruner" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634871 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" containerName="registry-server" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634879 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" containerName="registry-server" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634891 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" containerName="extract-utilities" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634899 5118 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" containerName="extract-utilities" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634908 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02df487c-5002-42fc-940c-02d7df55f614" containerName="pruner" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634914 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="02df487c-5002-42fc-940c-02d7df55f614" containerName="pruner" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634929 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fdc926a9-b83b-4c7d-9558-98ab053066a1" containerName="extract-content" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634936 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc926a9-b83b-4c7d-9558-98ab053066a1" containerName="extract-content" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634954 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fdc926a9-b83b-4c7d-9558-98ab053066a1" containerName="extract-utilities" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634960 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc926a9-b83b-4c7d-9558-98ab053066a1" containerName="extract-utilities" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634968 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8588582f-a24f-452b-8770-a5d9533724c0" containerName="extract-utilities" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634975 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8588582f-a24f-452b-8770-a5d9533724c0" containerName="extract-utilities" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634991 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fdc926a9-b83b-4c7d-9558-98ab053066a1" containerName="registry-server" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.634998 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc926a9-b83b-4c7d-9558-98ab053066a1" containerName="registry-server" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.635008 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8588582f-a24f-452b-8770-a5d9533724c0" containerName="registry-server" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.635014 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8588582f-a24f-452b-8770-a5d9533724c0" containerName="registry-server" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.635023 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8588582f-a24f-452b-8770-a5d9533724c0" containerName="extract-content" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.635031 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8588582f-a24f-452b-8770-a5d9533724c0" containerName="extract-content" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.637135 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="02df487c-5002-42fc-940c-02d7df55f614" containerName="pruner" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.637172 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="186873a7-acc0-4e1b-9013-e906ad994b3b" containerName="pruner" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.637188 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7e11da8-7a5b-49b5-a421-678c6c8fc10e" containerName="registry-server" Dec 08 19:32:00 
crc kubenswrapper[5118]: I1208 19:32:00.637200 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="4d00dd31-7ee8-4424-946d-c67a1cbe55b7" containerName="registry-server" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.637219 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="574501a5-bb4b-4c42-9046-e00bc9447f56" containerName="kube-multus-additional-cni-plugins" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.637235 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="fdc926a9-b83b-4c7d-9558-98ab053066a1" containerName="registry-server" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.637244 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="ab666d86-db2b-4489-a868-8d24159ea775" containerName="collect-profiles" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.637261 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="8588582f-a24f-452b-8770-a5d9533724c0" containerName="registry-server" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.656149 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.656736 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.664177 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.664779 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.805957 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a427b573-e7eb-41ae-ae1a-ea8ed019502f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.806406 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a427b573-e7eb-41ae-ae1a-ea8ed019502f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.908375 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a427b573-e7eb-41ae-ae1a-ea8ed019502f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.908450 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a427b573-e7eb-41ae-ae1a-ea8ed019502f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.908535 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a427b573-e7eb-41ae-ae1a-ea8ed019502f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.944810 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a427b573-e7eb-41ae-ae1a-ea8ed019502f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:32:00 crc kubenswrapper[5118]: I1208 19:32:00.984046 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:32:01 crc kubenswrapper[5118]: I1208 19:32:01.394087 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 19:32:01 crc kubenswrapper[5118]: I1208 19:32:01.714204 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a427b573-e7eb-41ae-ae1a-ea8ed019502f","Type":"ContainerStarted","Data":"f6867662e0ae4d21d3f62c08c0f79bd42b4752e40987e2086a7877b581e5746e"} Dec 08 19:32:02 crc kubenswrapper[5118]: I1208 19:32:02.721433 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a427b573-e7eb-41ae-ae1a-ea8ed019502f","Type":"ContainerStarted","Data":"1df399b289e5638888467a8f87665c44f4a3a2b7ccde330f2f0b18f7c42bcc38"} Dec 08 19:32:02 crc kubenswrapper[5118]: I1208 19:32:02.740660 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=2.740638642 podStartE2EDuration="2.740638642s" podCreationTimestamp="2025-12-08 19:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:32:02.734882517 +0000 UTC m=+175.027728014" watchObservedRunningTime="2025-12-08 19:32:02.740638642 +0000 UTC m=+175.033484109" Dec 08 19:32:03 crc kubenswrapper[5118]: I1208 19:32:03.729110 5118 generic.go:358] "Generic (PLEG): container finished" podID="a427b573-e7eb-41ae-ae1a-ea8ed019502f" containerID="1df399b289e5638888467a8f87665c44f4a3a2b7ccde330f2f0b18f7c42bcc38" exitCode=0 Dec 08 19:32:03 crc kubenswrapper[5118]: I1208 19:32:03.729187 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a427b573-e7eb-41ae-ae1a-ea8ed019502f","Type":"ContainerDied","Data":"1df399b289e5638888467a8f87665c44f4a3a2b7ccde330f2f0b18f7c42bcc38"} Dec 08 19:32:05 crc kubenswrapper[5118]: I1208 19:32:05.003878 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:32:05 crc kubenswrapper[5118]: I1208 19:32:05.162279 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kube-api-access\") pod \"a427b573-e7eb-41ae-ae1a-ea8ed019502f\" (UID: \"a427b573-e7eb-41ae-ae1a-ea8ed019502f\") " Dec 08 19:32:05 crc kubenswrapper[5118]: I1208 19:32:05.162425 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kubelet-dir\") pod \"a427b573-e7eb-41ae-ae1a-ea8ed019502f\" (UID: \"a427b573-e7eb-41ae-ae1a-ea8ed019502f\") " Dec 08 19:32:05 crc kubenswrapper[5118]: I1208 19:32:05.162766 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a427b573-e7eb-41ae-ae1a-ea8ed019502f" (UID: "a427b573-e7eb-41ae-ae1a-ea8ed019502f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:05 crc kubenswrapper[5118]: I1208 19:32:05.172853 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a427b573-e7eb-41ae-ae1a-ea8ed019502f" (UID: "a427b573-e7eb-41ae-ae1a-ea8ed019502f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:32:05 crc kubenswrapper[5118]: I1208 19:32:05.264557 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:05 crc kubenswrapper[5118]: I1208 19:32:05.264612 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a427b573-e7eb-41ae-ae1a-ea8ed019502f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:05 crc kubenswrapper[5118]: I1208 19:32:05.741955 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:32:05 crc kubenswrapper[5118]: I1208 19:32:05.742003 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a427b573-e7eb-41ae-ae1a-ea8ed019502f","Type":"ContainerDied","Data":"f6867662e0ae4d21d3f62c08c0f79bd42b4752e40987e2086a7877b581e5746e"} Dec 08 19:32:05 crc kubenswrapper[5118]: I1208 19:32:05.742066 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6867662e0ae4d21d3f62c08c0f79bd42b4752e40987e2086a7877b581e5746e" Dec 08 19:32:07 crc kubenswrapper[5118]: I1208 19:32:07.622444 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 19:32:07 crc kubenswrapper[5118]: I1208 19:32:07.628477 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a427b573-e7eb-41ae-ae1a-ea8ed019502f" containerName="pruner" Dec 08 19:32:07 crc kubenswrapper[5118]: I1208 19:32:07.628590 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="a427b573-e7eb-41ae-ae1a-ea8ed019502f" containerName="pruner" Dec 08 19:32:07 crc kubenswrapper[5118]: I1208 19:32:07.628806 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="a427b573-e7eb-41ae-ae1a-ea8ed019502f" containerName="pruner" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.095402 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.099486 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.102598 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.102846 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.206750 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b7dde72-b320-47ca-af99-98eee388ad8d-kube-api-access\") pod \"installer-12-crc\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.207132 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-var-lock\") pod \"installer-12-crc\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.207240 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.308783 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-var-lock\") pod 
\"installer-12-crc\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.308922 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.308940 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b7dde72-b320-47ca-af99-98eee388ad8d-kube-api-access\") pod \"installer-12-crc\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.308955 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-var-lock\") pod \"installer-12-crc\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.309109 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.328788 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b7dde72-b320-47ca-af99-98eee388ad8d-kube-api-access\") pod \"installer-12-crc\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.430709 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:08 crc kubenswrapper[5118]: I1208 19:32:08.815269 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 19:32:08 crc kubenswrapper[5118]: W1208 19:32:08.822100 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9b7dde72_b320_47ca_af99_98eee388ad8d.slice/crio-7df3f65afa710da1e12821d352d3ef5efaabc549755c3e05e20f88c2bc89f4eb WatchSource:0}: Error finding container 7df3f65afa710da1e12821d352d3ef5efaabc549755c3e05e20f88c2bc89f4eb: Status 404 returned error can't find the container with id 7df3f65afa710da1e12821d352d3ef5efaabc549755c3e05e20f88c2bc89f4eb Dec 08 19:32:09 crc kubenswrapper[5118]: I1208 19:32:09.780233 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"9b7dde72-b320-47ca-af99-98eee388ad8d","Type":"ContainerStarted","Data":"0abb409d2b428302edde31ff65b2ec92f818df8ee8dbdfd84cbb743d3454b0e7"} Dec 08 19:32:09 crc kubenswrapper[5118]: I1208 19:32:09.780710 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"9b7dde72-b320-47ca-af99-98eee388ad8d","Type":"ContainerStarted","Data":"7df3f65afa710da1e12821d352d3ef5efaabc549755c3e05e20f88c2bc89f4eb"} Dec 08 19:32:09 crc kubenswrapper[5118]: I1208 19:32:09.800775 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.800748763 podStartE2EDuration="2.800748763s" podCreationTimestamp="2025-12-08 19:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:32:09.798284507 +0000 UTC m=+182.091129964" watchObservedRunningTime="2025-12-08 19:32:09.800748763 +0000 UTC m=+182.093594220" Dec 08 19:32:16 crc kubenswrapper[5118]: I1208 19:32:16.425219 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-b68tb"] Dec 08 19:32:37 crc kubenswrapper[5118]: I1208 19:32:37.065501 5118 ???:1] "http: TLS handshake error from 192.168.126.11:57380: no serving certificate available for the kubelet" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.451677 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" podUID="00a48e62-fdf7-4d8f-846f-295c3cb4489e" containerName="oauth-openshift" containerID="cri-o://2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd" gracePeriod=15 Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.893091 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.932501 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72"] Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.933266 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="00a48e62-fdf7-4d8f-846f-295c3cb4489e" containerName="oauth-openshift" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.933283 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="00a48e62-fdf7-4d8f-846f-295c3cb4489e" containerName="oauth-openshift" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.933413 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="00a48e62-fdf7-4d8f-846f-295c3cb4489e" containerName="oauth-openshift" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.937569 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.954837 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72"] Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.981921 5118 generic.go:358] "Generic (PLEG): container finished" podID="00a48e62-fdf7-4d8f-846f-295c3cb4489e" containerID="2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd" exitCode=0 Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982086 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-dir\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982130 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-cliconfig\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982147 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" event={"ID":"00a48e62-fdf7-4d8f-846f-295c3cb4489e","Type":"ContainerDied","Data":"2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd"} Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982169 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-policies\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982181 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" event={"ID":"00a48e62-fdf7-4d8f-846f-295c3cb4489e","Type":"ContainerDied","Data":"78b30e46fc8d446f215a850f0a4067c36e47bed84cf3250a8cd3688bce12ac09"} Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982206 5118 scope.go:117] "RemoveContainer" containerID="2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982209 5118 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-service-ca\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982290 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-serving-cert\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982377 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-b68tb" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982392 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-trusted-ca-bundle\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982420 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-error\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982444 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-idp-0-file-data\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982491 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-provider-selection\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982520 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-login\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982571 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-ocp-branding-template\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982598 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-router-certs\") pod 
\"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982638 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-session\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982661 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzx79\" (UniqueName: \"kubernetes.io/projected/00a48e62-fdf7-4d8f-846f-295c3cb4489e-kube-api-access-tzx79\") pod \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\" (UID: \"00a48e62-fdf7-4d8f-846f-295c3cb4489e\") " Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.983263 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.982807 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984183 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984268 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-session\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984299 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984323 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " 
pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984386 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-audit-policies\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984420 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-audit-dir\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984455 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984491 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984512 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-template-error\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984588 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7h2p\" (UniqueName: \"kubernetes.io/projected/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-kube-api-access-f7h2p\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984655 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-template-login\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984678 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984723 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984814 5118 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984316 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984358 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.984461 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.986277 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.993261 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.993540 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.993872 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.994192 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.994259 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.996338 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00a48e62-fdf7-4d8f-846f-295c3cb4489e-kube-api-access-tzx79" (OuterVolumeSpecName: "kube-api-access-tzx79") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "kube-api-access-tzx79". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.998051 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.998457 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:41 crc kubenswrapper[5118]: I1208 19:32:41.998928 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "00a48e62-fdf7-4d8f-846f-295c3cb4489e" (UID: "00a48e62-fdf7-4d8f-846f-295c3cb4489e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.034481 5118 scope.go:117] "RemoveContainer" containerID="2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd" Dec 08 19:32:42 crc kubenswrapper[5118]: E1208 19:32:42.034942 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd\": container with ID starting with 2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd not found: ID does not exist" containerID="2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.034995 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd"} err="failed to get container status \"2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd\": rpc error: code = NotFound desc = could not find container \"2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd\": container with ID starting with 2a346b956025b3d1533f56c2adc5363e403321e0bc14e06df25dedd9c6e209cd not found: ID does not exist" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.085906 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.085959 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.086138 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-session\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.086779 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " 
pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.086682 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.086828 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087078 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-audit-policies\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087129 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-audit-dir\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087174 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087263 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087289 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-template-error\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087331 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f7h2p\" (UniqueName: \"kubernetes.io/projected/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-kube-api-access-f7h2p\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc 
kubenswrapper[5118]: I1208 19:32:42.087380 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-template-login\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087407 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087442 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087559 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087582 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087601 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087663 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tzx79\" (UniqueName: \"kubernetes.io/projected/00a48e62-fdf7-4d8f-846f-295c3cb4489e-kube-api-access-tzx79\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087680 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087710 5118 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087724 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087739 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087751 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087764 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087777 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087790 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087803 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00a48e62-fdf7-4d8f-846f-295c3cb4489e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.088253 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-audit-dir\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.087335 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.088907 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.089630 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.089703 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-audit-policies\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.090293 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.090497 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.091461 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-session\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.091505 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.092621 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-template-error\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.094975 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-user-template-login\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.095995 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.103838 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7h2p\" (UniqueName: 
\"kubernetes.io/projected/c78f9ab6-af96-4b36-ba2c-3e24764f9eb1-kube-api-access-f7h2p\") pod \"oauth-openshift-7c9dc6bcd7-x6p72\" (UID: \"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1\") " pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.266055 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.305183 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-b68tb"] Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.309157 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-b68tb"] Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.459475 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72"] Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.989015 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" event={"ID":"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1","Type":"ContainerStarted","Data":"b045626118169b91f838eb5261a0552237012786209f835f7160cfbb48d42ebd"} Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.989071 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" event={"ID":"c78f9ab6-af96-4b36-ba2c-3e24764f9eb1","Type":"ContainerStarted","Data":"6327c3e6893f5f84c31a81d2fb5da3e6f1e1c1a3ba2ec88a896b142e676f69e6"} Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.989252 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:42 crc kubenswrapper[5118]: I1208 19:32:42.998548 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" Dec 08 19:32:43 crc kubenswrapper[5118]: I1208 19:32:43.012143 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7c9dc6bcd7-x6p72" podStartSLOduration=27.012120894 podStartE2EDuration="27.012120894s" podCreationTimestamp="2025-12-08 19:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:32:43.010095209 +0000 UTC m=+215.302940706" watchObservedRunningTime="2025-12-08 19:32:43.012120894 +0000 UTC m=+215.304966361" Dec 08 19:32:44 crc kubenswrapper[5118]: I1208 19:32:44.103837 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00a48e62-fdf7-4d8f-846f-295c3cb4489e" path="/var/lib/kubelet/pods/00a48e62-fdf7-4d8f-846f-295c3cb4489e/volumes" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.156164 5118 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180270 5118 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180315 5118 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180406 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180857 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180877 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180886 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180891 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180901 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180907 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180917 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180923 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180938 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180943 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180961 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180967 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180977 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180982 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180992 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.180999 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 
19:32:47.181007 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181012 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181119 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181128 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181135 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181174 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181184 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181191 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181198 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181356 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2" gracePeriod=15 Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181533 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544" gracePeriod=15 Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181537 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181577 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181659 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9" gracePeriod=15 Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181784 5118 kuberuntime_container.go:858] "Killing container 
with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7" gracePeriod=15 Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181834 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181850 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.181835 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa" gracePeriod=15 Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.185290 5118 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.262274 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: E1208 19:32:47.263863 5118 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.271571 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.271660 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.271704 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.271864 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 
19:32:47.271894 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.271945 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.271966 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.271989 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.272021 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.272095 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.372828 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.372913 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.372938 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: 
I1208 19:32:47.372972 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.372994 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373018 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373047 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373068 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373174 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373195 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373225 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373252 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373297 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373359 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373393 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373424 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373452 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373484 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373882 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.373985 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: I1208 19:32:47.564944 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:47 crc kubenswrapper[5118]: E1208 19:32:47.598651 5118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f54640a3dc89c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:32:47.597807772 +0000 UTC m=+219.890653259,LastTimestamp:2025-12-08 19:32:47.597807772 +0000 UTC m=+219.890653259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.029918 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.032002 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.032767 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544" exitCode=0 Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.032799 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7" exitCode=0 Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.032811 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9" exitCode=0 Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.032823 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa" exitCode=2 Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.032980 5118 scope.go:117] "RemoveContainer" containerID="cb4c1afa8991d0b84341d409764a25e3a5d637fbe883b782a86bf22292e14eef" Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.034730 5118 generic.go:358] "Generic (PLEG): container finished" podID="9b7dde72-b320-47ca-af99-98eee388ad8d" containerID="0abb409d2b428302edde31ff65b2ec92f818df8ee8dbdfd84cbb743d3454b0e7" exitCode=0 Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.034890 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"9b7dde72-b320-47ca-af99-98eee388ad8d","Type":"ContainerDied","Data":"0abb409d2b428302edde31ff65b2ec92f818df8ee8dbdfd84cbb743d3454b0e7"} Dec 08 
19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.035876 5118 status_manager.go:895] "Failed to get status for pod" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.037429 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46"} Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.037472 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"75222a71db37e2817d01eca9020a6fdf8e1e404e319fed30cdd04afbe15f19b4"} Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.037858 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.038503 5118 status_manager.go:895] "Failed to get status for pod" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:48 crc kubenswrapper[5118]: E1208 19:32:48.038563 5118 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:48 crc kubenswrapper[5118]: I1208 19:32:48.101630 5118 status_manager.go:895] "Failed to get status for pod" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.050641 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.383110 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.384750 5118 status_manager.go:895] "Failed to get status for pod" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.503097 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b7dde72-b320-47ca-af99-98eee388ad8d-kube-api-access\") pod \"9b7dde72-b320-47ca-af99-98eee388ad8d\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.503183 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-var-lock\") pod \"9b7dde72-b320-47ca-af99-98eee388ad8d\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.503250 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-kubelet-dir\") pod \"9b7dde72-b320-47ca-af99-98eee388ad8d\" (UID: \"9b7dde72-b320-47ca-af99-98eee388ad8d\") " Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.503385 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-var-lock" (OuterVolumeSpecName: "var-lock") pod "9b7dde72-b320-47ca-af99-98eee388ad8d" (UID: "9b7dde72-b320-47ca-af99-98eee388ad8d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.503535 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9b7dde72-b320-47ca-af99-98eee388ad8d" (UID: "9b7dde72-b320-47ca-af99-98eee388ad8d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.503637 5118 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.508854 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7dde72-b320-47ca-af99-98eee388ad8d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9b7dde72-b320-47ca-af99-98eee388ad8d" (UID: "9b7dde72-b320-47ca-af99-98eee388ad8d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.565095 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.566519 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.567418 5118 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.568141 5118 status_manager.go:895] "Failed to get status for pod" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.605380 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b7dde72-b320-47ca-af99-98eee388ad8d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.605442 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b7dde72-b320-47ca-af99-98eee388ad8d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.706993 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.707103 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.707220 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.707381 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.707458 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.707933 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.707989 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.708027 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.708037 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.710996 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.809292 5118 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.809327 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.809338 5118 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.809348 5118 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:49 crc kubenswrapper[5118]: I1208 19:32:49.809358 5118 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.060582 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"9b7dde72-b320-47ca-af99-98eee388ad8d","Type":"ContainerDied","Data":"7df3f65afa710da1e12821d352d3ef5efaabc549755c3e05e20f88c2bc89f4eb"} Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.060664 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7df3f65afa710da1e12821d352d3ef5efaabc549755c3e05e20f88c2bc89f4eb" Dec 08 19:32:50 crc 
kubenswrapper[5118]: I1208 19:32:50.060621 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.064063 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.065984 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2" exitCode=0 Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.066052 5118 scope.go:117] "RemoveContainer" containerID="1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.066263 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.087451 5118 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.087948 5118 status_manager.go:895] "Failed to get status for pod" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.088359 5118 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.088584 5118 status_manager.go:895] "Failed to get status for pod" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.089763 5118 scope.go:117] "RemoveContainer" containerID="4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.108617 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.115516 5118 scope.go:117] "RemoveContainer" containerID="3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.133821 5118 scope.go:117] "RemoveContainer" containerID="544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.152269 5118 scope.go:117] "RemoveContainer" containerID="77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2" Dec 08 19:32:50 
crc kubenswrapper[5118]: I1208 19:32:50.168807 5118 scope.go:117] "RemoveContainer" containerID="9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.236661 5118 scope.go:117] "RemoveContainer" containerID="1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544" Dec 08 19:32:50 crc kubenswrapper[5118]: E1208 19:32:50.237401 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544\": container with ID starting with 1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544 not found: ID does not exist" containerID="1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.237443 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544"} err="failed to get container status \"1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544\": rpc error: code = NotFound desc = could not find container \"1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544\": container with ID starting with 1bd91bbd0987e709fe88c3fde86f962659c94e69337c753ce7e644582a437544 not found: ID does not exist" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.237469 5118 scope.go:117] "RemoveContainer" containerID="4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7" Dec 08 19:32:50 crc kubenswrapper[5118]: E1208 19:32:50.237920 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7\": container with ID starting with 4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7 not found: ID does not exist" containerID="4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.237970 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7"} err="failed to get container status \"4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7\": rpc error: code = NotFound desc = could not find container \"4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7\": container with ID starting with 4817842e9634a68433d70a28cd0b33c1b59095c64add0a62c2b6fd0ab631f1d7 not found: ID does not exist" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.238002 5118 scope.go:117] "RemoveContainer" containerID="3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9" Dec 08 19:32:50 crc kubenswrapper[5118]: E1208 19:32:50.238369 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9\": container with ID starting with 3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9 not found: ID does not exist" containerID="3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.238395 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9"} err="failed to get container status 
\"3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9\": rpc error: code = NotFound desc = could not find container \"3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9\": container with ID starting with 3b7f60e5988ac995422f8474669dd797d1a34954667133db76927b3e50a6ffd9 not found: ID does not exist" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.238409 5118 scope.go:117] "RemoveContainer" containerID="544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa" Dec 08 19:32:50 crc kubenswrapper[5118]: E1208 19:32:50.238795 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa\": container with ID starting with 544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa not found: ID does not exist" containerID="544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.238834 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa"} err="failed to get container status \"544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa\": rpc error: code = NotFound desc = could not find container \"544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa\": container with ID starting with 544909c828d1ef83c8368679ae033c6c9d29d544f7fd68216170667e54678dfa not found: ID does not exist" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.238859 5118 scope.go:117] "RemoveContainer" containerID="77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2" Dec 08 19:32:50 crc kubenswrapper[5118]: E1208 19:32:50.239480 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2\": container with ID starting with 77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2 not found: ID does not exist" containerID="77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.239509 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2"} err="failed to get container status \"77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2\": rpc error: code = NotFound desc = could not find container \"77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2\": container with ID starting with 77550c63414c65bb016bee1a9d97583c8688d5089b08f1e5d15d794eea50fea2 not found: ID does not exist" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.239522 5118 scope.go:117] "RemoveContainer" containerID="9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa" Dec 08 19:32:50 crc kubenswrapper[5118]: E1208 19:32:50.239770 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\": container with ID starting with 9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa not found: ID does not exist" containerID="9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa" Dec 08 19:32:50 crc kubenswrapper[5118]: I1208 19:32:50.239796 5118 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa"} err="failed to get container status \"9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\": rpc error: code = NotFound desc = could not find container \"9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa\": container with ID starting with 9186b189456adedbfd6a233038d86bedc2c388d4eda8f6327690668d9bad97fa not found: ID does not exist" Dec 08 19:32:54 crc kubenswrapper[5118]: E1208 19:32:54.223252 5118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f54640a3dc89c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:32:47.597807772 +0000 UTC m=+219.890653259,LastTimestamp:2025-12-08 19:32:47.597807772 +0000 UTC m=+219.890653259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:32:55 crc kubenswrapper[5118]: E1208 19:32:55.600487 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:55 crc kubenswrapper[5118]: E1208 19:32:55.601447 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:55 crc kubenswrapper[5118]: E1208 19:32:55.601867 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:55 crc kubenswrapper[5118]: E1208 19:32:55.602342 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:55 crc kubenswrapper[5118]: E1208 19:32:55.603082 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:55 crc kubenswrapper[5118]: I1208 19:32:55.603122 5118 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 08 19:32:55 crc kubenswrapper[5118]: E1208 19:32:55.603410 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="200ms" Dec 08 19:32:55 crc kubenswrapper[5118]: E1208 19:32:55.804215 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="400ms" Dec 08 19:32:56 crc kubenswrapper[5118]: E1208 19:32:56.205259 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="800ms" Dec 08 19:32:57 crc kubenswrapper[5118]: E1208 19:32:57.006783 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="1.6s" Dec 08 19:32:57 crc kubenswrapper[5118]: E1208 19:32:57.152922 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:57 crc kubenswrapper[5118]: E1208 19:32:57.153636 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:57 crc kubenswrapper[5118]: E1208 19:32:57.154249 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:57 crc kubenswrapper[5118]: E1208 19:32:57.154615 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:57 crc kubenswrapper[5118]: E1208 19:32:57.155211 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:57 crc kubenswrapper[5118]: E1208 19:32:57.155266 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 19:32:58 crc kubenswrapper[5118]: I1208 19:32:58.099777 5118 status_manager.go:895] "Failed to get status for pod" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:32:58 crc kubenswrapper[5118]: E1208 19:32:58.123945 5118 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" volumeName="registry-storage" Dec 08 19:32:58 crc kubenswrapper[5118]: E1208 19:32:58.607822 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="3.2s" Dec 08 19:33:00 crc kubenswrapper[5118]: I1208 19:33:00.096373 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:00 crc kubenswrapper[5118]: I1208 19:33:00.099632 5118 status_manager.go:895] "Failed to get status for pod" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:33:00 crc kubenswrapper[5118]: I1208 19:33:00.123527 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cf2d8304-0772-47e0-8c2d-ed33f18c6dda" Dec 08 19:33:00 crc kubenswrapper[5118]: I1208 19:33:00.123890 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cf2d8304-0772-47e0-8c2d-ed33f18c6dda" Dec 08 19:33:00 crc kubenswrapper[5118]: E1208 19:33:00.124446 5118 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:00 crc kubenswrapper[5118]: I1208 19:33:00.124871 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:01 crc kubenswrapper[5118]: I1208 19:33:01.148952 5118 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="08351844a5fe4809f70627d761331062c9219e85f28105f162a006e9798e5843" exitCode=0 Dec 08 19:33:01 crc kubenswrapper[5118]: I1208 19:33:01.149080 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"08351844a5fe4809f70627d761331062c9219e85f28105f162a006e9798e5843"} Dec 08 19:33:01 crc kubenswrapper[5118]: I1208 19:33:01.149160 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5eb3cd3be4e7037ef3f7df84a7955bea3365a69dc6ddb8e24c0a628e5f363533"} Dec 08 19:33:01 crc kubenswrapper[5118]: I1208 19:33:01.149616 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cf2d8304-0772-47e0-8c2d-ed33f18c6dda" Dec 08 19:33:01 crc kubenswrapper[5118]: I1208 19:33:01.149642 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cf2d8304-0772-47e0-8c2d-ed33f18c6dda" Dec 08 19:33:01 crc kubenswrapper[5118]: E1208 19:33:01.150367 5118 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:01 crc kubenswrapper[5118]: I1208 19:33:01.150439 5118 status_manager.go:895] "Failed to get status for pod" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Dec 08 19:33:02 crc kubenswrapper[5118]: I1208 19:33:02.169472 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"fa11a9f9c640e4d70188e2fc9d4dd3b19da1e855fc2d402ad9cf4731f8158724"} Dec 08 19:33:02 crc kubenswrapper[5118]: I1208 19:33:02.169926 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1fb047a35f241b5a195964d3db4934ab99dfb6e574b98e1ef379bbde63c3d743"} Dec 08 19:33:02 crc kubenswrapper[5118]: I1208 19:33:02.169943 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d10289a377d7fdc3f5523e74a623cd39de27d3100bc3debfda5de243e72fba92"} Dec 08 19:33:02 crc kubenswrapper[5118]: I1208 19:33:02.172839 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:33:02 crc kubenswrapper[5118]: I1208 19:33:02.172878 5118 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583" exitCode=1 
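Note on the RemoveContainer/ContainerStatus records at 19:32:50 above: after the old kube-apiserver-crc pod's volumes are torn down, the kubelet asks CRI-O to delete each retired container, and every follow-up ContainerStatus lookup comes back as rpc code NotFound because the container is already gone; the kubelet logs the error and treats the cleanup as complete. A minimal Go sketch of that idempotent-cleanup shape (names are illustrative, not the kubelet's source; requires the google.golang.org/grpc module):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeFromRuntime stands in for the CRI RemoveContainer/ContainerStatus
// round-trip against CRI-O; here it always reports NotFound, matching the
// records above where the containers had already been removed.
func removeFromRuntime(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: container with ID starting with %s not found: ID does not exist", id, id)
}

// removeContainer mirrors the pattern in the log: NotFound from the runtime
// is not a failure -- the container is already gone, so the deletion is
// treated as done and the kubelet moves on to the next container ID.
func removeContainer(id string) error {
	if err := removeFromRuntime(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %s already removed\n", id)
			return nil
		}
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	return nil
}

func main() {
	if err := removeContainer("1bd91bbd0987"); err != nil {
		fmt.Println(err)
	}
}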
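The lease records at 19:32:55 through 19:32:58 show the node heartbeat while the API server is still down: five back-to-back PUTs to the kube-node-lease/crc Lease fail within milliseconds, the controller falls back to ensuring the lease exists, and each ensure retry doubles the logged interval (200ms, 400ms, 800ms, 1.6s, 3.2s). A small sketch of that retry shape, assuming a plain doubling delay and bounded here so it terminates (function names are illustrative, not the kubelet's):

package main

import (
	"errors"
	"fmt"
	"time"
)

// renewLease stands in for the kubelet's request to
// /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc.
// It always fails here, as in the log while the API server is unreachable.
func renewLease() error {
	return errors.New("dial tcp 38.102.83.151:6443: connect: connection refused")
}

func syncLease() {
	// Phase 1: up to five immediate update attempts (all five fail above
	// within the same second, 19:32:55.600-19:32:55.603).
	const maxUpdateRetries = 5
	for i := 0; i < maxUpdateRetries; i++ {
		if renewLease() == nil {
			return
		}
		fmt.Println("Failed to update lease")
	}
	fmt.Println("failed to update lease using latest lease, fallback to ensure lease")

	// Phase 2: fall back to ensuring the lease exists, doubling the retry
	// interval each time -- the 200ms, 400ms, 800ms, 1.6s, 3.2s progression
	// logged by controller.go:145.
	interval := 200 * time.Millisecond
	for attempt := 0; attempt < 5; attempt++ {
		if renewLease() == nil {
			return
		}
		fmt.Printf("Failed to ensure lease exists, will retry; interval=%v\n", interval)
		time.Sleep(interval)
		interval *= 2
	}
}

func main() { syncLease() }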
Dec 08 19:33:02 crc kubenswrapper[5118]: I1208 19:33:02.173034 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583"} Dec 08 19:33:02 crc kubenswrapper[5118]: I1208 19:33:02.173543 5118 scope.go:117] "RemoveContainer" containerID="ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583" Dec 08 19:33:02 crc kubenswrapper[5118]: I1208 19:33:02.238853 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:33:03 crc kubenswrapper[5118]: I1208 19:33:03.181783 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c512a0cadee7f623c8a7257e315890f15ccf085eca91bef5235588d0117bdb64"} Dec 08 19:33:03 crc kubenswrapper[5118]: I1208 19:33:03.182507 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f33d9251fb7fe954a5b36695159c5d5f243b23177cf2f5da2656132099504812"} Dec 08 19:33:03 crc kubenswrapper[5118]: I1208 19:33:03.182529 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:03 crc kubenswrapper[5118]: I1208 19:33:03.182055 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cf2d8304-0772-47e0-8c2d-ed33f18c6dda" Dec 08 19:33:03 crc kubenswrapper[5118]: I1208 19:33:03.182593 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cf2d8304-0772-47e0-8c2d-ed33f18c6dda" Dec 08 19:33:03 crc kubenswrapper[5118]: I1208 19:33:03.185143 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:33:03 crc kubenswrapper[5118]: I1208 19:33:03.185361 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6e594ca61541bca3ffd1b0d815825b9e1c94a6070d071e87381dae2c1664e59b"} Dec 08 19:33:04 crc kubenswrapper[5118]: I1208 19:33:04.008410 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:33:04 crc kubenswrapper[5118]: I1208 19:33:04.008618 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 19:33:04 crc kubenswrapper[5118]: I1208 19:33:04.008670 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 19:33:05 crc 
kubenswrapper[5118]: I1208 19:33:05.126102 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:05 crc kubenswrapper[5118]: I1208 19:33:05.126672 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:05 crc kubenswrapper[5118]: I1208 19:33:05.133512 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:08 crc kubenswrapper[5118]: I1208 19:33:08.190771 5118 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:08 crc kubenswrapper[5118]: I1208 19:33:08.191123 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:08 crc kubenswrapper[5118]: I1208 19:33:08.217571 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cf2d8304-0772-47e0-8c2d-ed33f18c6dda" Dec 08 19:33:08 crc kubenswrapper[5118]: I1208 19:33:08.217600 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cf2d8304-0772-47e0-8c2d-ed33f18c6dda" Dec 08 19:33:08 crc kubenswrapper[5118]: I1208 19:33:08.222952 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:08 crc kubenswrapper[5118]: I1208 19:33:08.225037 5118 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="630365f1-6c9a-43ec-a454-2ae6f8a97785" Dec 08 19:33:08 crc kubenswrapper[5118]: I1208 19:33:08.873778 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:33:09 crc kubenswrapper[5118]: I1208 19:33:09.223607 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cf2d8304-0772-47e0-8c2d-ed33f18c6dda" Dec 08 19:33:09 crc kubenswrapper[5118]: I1208 19:33:09.223635 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cf2d8304-0772-47e0-8c2d-ed33f18c6dda" Dec 08 19:33:09 crc kubenswrapper[5118]: I1208 19:33:09.468005 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:33:09 crc kubenswrapper[5118]: I1208 19:33:09.468118 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:33:14 crc kubenswrapper[5118]: I1208 19:33:14.008088 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 
19:33:14 crc kubenswrapper[5118]: I1208 19:33:14.009062 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 19:33:18 crc kubenswrapper[5118]: I1208 19:33:18.128598 5118 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="630365f1-6c9a-43ec-a454-2ae6f8a97785" Dec 08 19:33:18 crc kubenswrapper[5118]: I1208 19:33:18.452642 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 19:33:18 crc kubenswrapper[5118]: I1208 19:33:18.516320 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5118]: I1208 19:33:18.647771 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:19 crc kubenswrapper[5118]: I1208 19:33:19.051990 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 19:33:19 crc kubenswrapper[5118]: I1208 19:33:19.373919 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:19 crc kubenswrapper[5118]: I1208 19:33:19.834844 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 19:33:19 crc kubenswrapper[5118]: I1208 19:33:19.963242 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:20 crc kubenswrapper[5118]: I1208 19:33:20.344507 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 19:33:20 crc kubenswrapper[5118]: I1208 19:33:20.404664 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 19:33:20 crc kubenswrapper[5118]: I1208 19:33:20.511884 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 19:33:20 crc kubenswrapper[5118]: I1208 19:33:20.737539 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 19:33:20 crc kubenswrapper[5118]: I1208 19:33:20.746245 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:20 crc kubenswrapper[5118]: I1208 19:33:20.831292 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 19:33:20 crc kubenswrapper[5118]: I1208 19:33:20.953633 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 19:33:21 crc kubenswrapper[5118]: 
I1208 19:33:21.010917 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.019229 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.036357 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.130405 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.215934 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.302036 5118 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.309939 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.310027 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.314872 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.318868 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.331195 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.331175201 podStartE2EDuration="13.331175201s" podCreationTimestamp="2025-12-08 19:33:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:33:21.32999737 +0000 UTC m=+253.622842827" watchObservedRunningTime="2025-12-08 19:33:21.331175201 +0000 UTC m=+253.624020678" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.465879 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.465971 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.480019 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.499509 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.529600 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.532006 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.568848 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.572680 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.619637 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.704390 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.734176 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.741101 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.782391 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.802441 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.834664 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.978979 5118 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 19:33:21 crc kubenswrapper[5118]: I1208 19:33:21.994380 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.103974 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.318181 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.392181 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.397148 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.403167 5118 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.432357 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 
19:33:22.455822 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.480668 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.610137 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.656887 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.667410 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.699313 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.745659 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.754919 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.878516 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.890562 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 19:33:22 crc kubenswrapper[5118]: I1208 19:33:22.994134 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.059902 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.153157 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.161156 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.227843 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.235301 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.259199 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 19:33:23 crc 
kubenswrapper[5118]: I1208 19:33:23.275388 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.369144 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.470963 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.471524 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.635058 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.637137 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.637645 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.728827 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.828520 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 19:33:23 crc kubenswrapper[5118]: I1208 19:33:23.934764 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.009025 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.009149 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.009295 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.010524 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"6e594ca61541bca3ffd1b0d815825b9e1c94a6070d071e87381dae2c1664e59b"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.010728 5118 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://6e594ca61541bca3ffd1b0d815825b9e1c94a6070d071e87381dae2c1664e59b" gracePeriod=30 Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.032251 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.146951 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.168807 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.214971 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.404475 5118 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.473358 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.476954 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.494139 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.546496 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.623703 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.630605 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.633385 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.657359 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.732282 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.767041 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.771725 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.813265 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.813538 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.825584 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.863297 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.871974 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.881629 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 19:33:24 crc kubenswrapper[5118]: I1208 19:33:24.964446 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.013257 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.018790 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.024253 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.057063 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.202048 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.222141 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.294332 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.475455 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.646198 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.688541 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.698510 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.751772 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.784443 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.797881 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.934523 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 19:33:25 crc kubenswrapper[5118]: I1208 19:33:25.963739 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.038872 5118 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.118075 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.139280 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.206589 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.260061 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.282084 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.359889 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.427189 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.428606 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.554432 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.697208 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.757851 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 19:33:26 crc 
kubenswrapper[5118]: I1208 19:33:26.762145 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.776638 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 19:33:26 crc kubenswrapper[5118]: I1208 19:33:26.974186 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.117128 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.143527 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.146035 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.169458 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.183281 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.218880 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.272114 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.280666 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.495969 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.510660 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.672623 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.704437 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.850446 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.865684 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 
19:33:27.897761 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.924972 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.948527 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.960283 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 19:33:27 crc kubenswrapper[5118]: I1208 19:33:27.967646 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.174629 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.179146 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.344339 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.389449 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.394182 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.460619 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.513447 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.523179 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.638886 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.658042 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.660993 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.712579 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.829727 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.869161 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.903869 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 19:33:28 crc kubenswrapper[5118]: I1208 19:33:28.961235 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.020020 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.048906 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.049306 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.109207 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.124994 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.222062 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.274466 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.298610 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.330390 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.346226 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.353518 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.362404 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.393233 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.421615 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.426942 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.455927 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.467368 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.475646 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.477172 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.574859 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.618031 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.664386 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.718934 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.785392 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.877409 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.922942 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.933806 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.948540 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 19:33:29 crc kubenswrapper[5118]: I1208 19:33:29.988738 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.049776 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.165462 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.267627 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.298570 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.406024 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56956: no serving certificate available for the kubelet" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.429400 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.453993 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.483461 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.554022 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.573716 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.596958 5118 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.597262 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46" gracePeriod=5 Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.681959 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.760279 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.838146 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 19:33:30 crc kubenswrapper[5118]: I1208 19:33:30.883220 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.064306 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.105399 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.139234 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.170660 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.174139 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.235440 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.291500 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.299906 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.453082 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.500149 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.640345 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.665351 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.905200 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.923995 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.969973 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:33:31 crc kubenswrapper[5118]: I1208 19:33:31.982955 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.132442 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.171309 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.271227 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.311728 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.313317 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.400924 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.720259 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.755549 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.857077 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.889925 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.982915 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 19:33:32 crc kubenswrapper[5118]: I1208 19:33:32.988438 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.057076 5118 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.178664 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.195712 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.292409 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.341545 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.364986 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.369794 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.430807 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 
19:33:33.531603 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.654716 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.694141 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.780637 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.825351 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 19:33:33 crc kubenswrapper[5118]: I1208 19:33:33.968453 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 19:33:34 crc kubenswrapper[5118]: I1208 19:33:34.131013 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 19:33:34 crc kubenswrapper[5118]: I1208 19:33:34.224480 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:33:34 crc kubenswrapper[5118]: I1208 19:33:34.366848 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 19:33:34 crc kubenswrapper[5118]: I1208 19:33:34.403673 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 19:33:34 crc kubenswrapper[5118]: I1208 19:33:34.443721 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 19:33:34 crc kubenswrapper[5118]: I1208 19:33:34.812246 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 19:33:35 crc kubenswrapper[5118]: I1208 19:33:35.131308 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 19:33:35 crc kubenswrapper[5118]: I1208 19:33:35.176289 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 19:33:35 crc kubenswrapper[5118]: I1208 19:33:35.454386 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.164924 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.165007 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.166604 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.264455 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.264502 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.264531 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.264593 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.264648 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.264932 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.264963 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.264979 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.264995 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.279277 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.365927 5118 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.365980 5118 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.365998 5118 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.366015 5118 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.366032 5118 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.394351 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.394397 5118 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46" exitCode=137 Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.394515 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.394534 5118 scope.go:117] "RemoveContainer" containerID="aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.397478 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.425001 5118 scope.go:117] "RemoveContainer" containerID="aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46" Dec 08 19:33:36 crc kubenswrapper[5118]: E1208 19:33:36.425568 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46\": container with ID starting with aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46 not found: ID does not exist" containerID="aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.425623 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46"} err="failed to get container status \"aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46\": rpc error: code = NotFound desc = could not find container \"aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46\": container with ID starting with aaa82d8d2d6554bd2c918b1c0d9188076a0a9795dad062e776985f3314a4ee46 not found: ID does not exist" Dec 08 19:33:36 crc kubenswrapper[5118]: I1208 19:33:36.426986 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 19:33:38 crc kubenswrapper[5118]: I1208 19:33:38.101609 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 19:33:38 crc kubenswrapper[5118]: I1208 19:33:38.105836 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 08 19:33:39 crc kubenswrapper[5118]: I1208 19:33:39.467890 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:33:39 crc kubenswrapper[5118]: I1208 19:33:39.468937 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:33:49 crc kubenswrapper[5118]: I1208 19:33:49.825852 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7px9v"] Dec 08 19:33:49 crc kubenswrapper[5118]: I1208 19:33:49.826803 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7px9v" podUID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" containerName="registry-server" containerID="cri-o://ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a" gracePeriod=30 Dec 08 19:33:49 crc kubenswrapper[5118]: I1208 19:33:49.842273 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cs27m"] Dec 08 19:33:49 crc kubenswrapper[5118]: I1208 19:33:49.842792 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cs27m" podUID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" containerName="registry-server" containerID="cri-o://9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109" gracePeriod=30 Dec 08 19:33:49 crc kubenswrapper[5118]: I1208 19:33:49.851764 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8htc9"] Dec 08 19:33:49 crc kubenswrapper[5118]: I1208 19:33:49.852392 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" podUID="943f723e-defa-4cda-914e-964cdf480831" containerName="marketplace-operator" containerID="cri-o://394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6" gracePeriod=30 Dec 08 19:33:49 crc kubenswrapper[5118]: I1208 19:33:49.870044 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rpxq"] Dec 08 19:33:49 crc kubenswrapper[5118]: I1208 19:33:49.870305 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8rpxq" podUID="70414740-2872-4ebd-b3b5-ded149c0f019" containerName="registry-server" containerID="cri-o://956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e" gracePeriod=30 Dec 08 19:33:49 crc kubenswrapper[5118]: I1208 19:33:49.874101 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mlt4z"] Dec 08 19:33:49 crc kubenswrapper[5118]: I1208 19:33:49.874423 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mlt4z" podUID="14b81eee-396d-4e4e-a48c-87183aa677a0" containerName="registry-server" containerID="cri-o://348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4" gracePeriod=30 Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.221399 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.223078 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.266798 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.274512 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.277436 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.320409 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-utilities\") pod \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.320626 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-catalog-content\") pod \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.321546 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-utilities" (OuterVolumeSpecName: "utilities") pod "6d799616-15c0-4e4f-8cbb-5f33d9f607ef" (UID: "6d799616-15c0-4e4f-8cbb-5f33d9f607ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.325827 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b5gm\" (UniqueName: \"kubernetes.io/projected/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-kube-api-access-5b5gm\") pod \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\" (UID: \"6d799616-15c0-4e4f-8cbb-5f33d9f607ef\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.326141 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.332011 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-kube-api-access-5b5gm" (OuterVolumeSpecName: "kube-api-access-5b5gm") pod "6d799616-15c0-4e4f-8cbb-5f33d9f607ef" (UID: "6d799616-15c0-4e4f-8cbb-5f33d9f607ef"). InnerVolumeSpecName "kube-api-access-5b5gm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.351417 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d799616-15c0-4e4f-8cbb-5f33d9f607ef" (UID: "6d799616-15c0-4e4f-8cbb-5f33d9f607ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.426854 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-utilities\") pod \"70414740-2872-4ebd-b3b5-ded149c0f019\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.426918 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-catalog-content\") pod \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.428078 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-utilities" (OuterVolumeSpecName: "utilities") pod "70414740-2872-4ebd-b3b5-ded149c0f019" (UID: "70414740-2872-4ebd-b3b5-ded149c0f019"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.428797 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjmkp\" (UniqueName: \"kubernetes.io/projected/14b81eee-396d-4e4e-a48c-87183aa677a0-kube-api-access-gjmkp\") pod \"14b81eee-396d-4e4e-a48c-87183aa677a0\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.428836 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-catalog-content\") pod \"14b81eee-396d-4e4e-a48c-87183aa677a0\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.428860 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-catalog-content\") pod \"70414740-2872-4ebd-b3b5-ded149c0f019\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.428879 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/943f723e-defa-4cda-914e-964cdf480831-marketplace-operator-metrics\") pod \"943f723e-defa-4cda-914e-964cdf480831\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429304 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/943f723e-defa-4cda-914e-964cdf480831-marketplace-trusted-ca\") pod \"943f723e-defa-4cda-914e-964cdf480831\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429351 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp46w\" (UniqueName: \"kubernetes.io/projected/9801ce4f-e9bf-4c09-a624-81675bbda6fa-kube-api-access-dp46w\") pod \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429370 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/943f723e-defa-4cda-914e-964cdf480831-tmp\") pod \"943f723e-defa-4cda-914e-964cdf480831\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429385 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-utilities\") pod \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\" (UID: \"9801ce4f-e9bf-4c09-a624-81675bbda6fa\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429404 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbrl8\" (UniqueName: \"kubernetes.io/projected/70414740-2872-4ebd-b3b5-ded149c0f019-kube-api-access-bbrl8\") pod \"70414740-2872-4ebd-b3b5-ded149c0f019\" (UID: \"70414740-2872-4ebd-b3b5-ded149c0f019\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429417 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-utilities\") pod \"14b81eee-396d-4e4e-a48c-87183aa677a0\" (UID: \"14b81eee-396d-4e4e-a48c-87183aa677a0\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429431 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p894h\" (UniqueName: \"kubernetes.io/projected/943f723e-defa-4cda-914e-964cdf480831-kube-api-access-p894h\") pod \"943f723e-defa-4cda-914e-964cdf480831\" (UID: \"943f723e-defa-4cda-914e-964cdf480831\") " Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429750 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/943f723e-defa-4cda-914e-964cdf480831-tmp" (OuterVolumeSpecName: "tmp") pod "943f723e-defa-4cda-914e-964cdf480831" (UID: "943f723e-defa-4cda-914e-964cdf480831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429850 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/943f723e-defa-4cda-914e-964cdf480831-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429868 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429883 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.429894 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5b5gm\" (UniqueName: \"kubernetes.io/projected/6d799616-15c0-4e4f-8cbb-5f33d9f607ef-kube-api-access-5b5gm\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.430222 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943f723e-defa-4cda-914e-964cdf480831-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "943f723e-defa-4cda-914e-964cdf480831" (UID: "943f723e-defa-4cda-914e-964cdf480831"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.431063 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-utilities" (OuterVolumeSpecName: "utilities") pod "9801ce4f-e9bf-4c09-a624-81675bbda6fa" (UID: "9801ce4f-e9bf-4c09-a624-81675bbda6fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.431239 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-utilities" (OuterVolumeSpecName: "utilities") pod "14b81eee-396d-4e4e-a48c-87183aa677a0" (UID: "14b81eee-396d-4e4e-a48c-87183aa677a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.431889 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b81eee-396d-4e4e-a48c-87183aa677a0-kube-api-access-gjmkp" (OuterVolumeSpecName: "kube-api-access-gjmkp") pod "14b81eee-396d-4e4e-a48c-87183aa677a0" (UID: "14b81eee-396d-4e4e-a48c-87183aa677a0"). InnerVolumeSpecName "kube-api-access-gjmkp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.432338 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70414740-2872-4ebd-b3b5-ded149c0f019-kube-api-access-bbrl8" (OuterVolumeSpecName: "kube-api-access-bbrl8") pod "70414740-2872-4ebd-b3b5-ded149c0f019" (UID: "70414740-2872-4ebd-b3b5-ded149c0f019"). InnerVolumeSpecName "kube-api-access-bbrl8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.432821 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943f723e-defa-4cda-914e-964cdf480831-kube-api-access-p894h" (OuterVolumeSpecName: "kube-api-access-p894h") pod "943f723e-defa-4cda-914e-964cdf480831" (UID: "943f723e-defa-4cda-914e-964cdf480831"). InnerVolumeSpecName "kube-api-access-p894h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.432963 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943f723e-defa-4cda-914e-964cdf480831-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "943f723e-defa-4cda-914e-964cdf480831" (UID: "943f723e-defa-4cda-914e-964cdf480831"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.433007 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9801ce4f-e9bf-4c09-a624-81675bbda6fa-kube-api-access-dp46w" (OuterVolumeSpecName: "kube-api-access-dp46w") pod "9801ce4f-e9bf-4c09-a624-81675bbda6fa" (UID: "9801ce4f-e9bf-4c09-a624-81675bbda6fa"). InnerVolumeSpecName "kube-api-access-dp46w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.446515 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70414740-2872-4ebd-b3b5-ded149c0f019" (UID: "70414740-2872-4ebd-b3b5-ded149c0f019"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.480552 5118 generic.go:358] "Generic (PLEG): container finished" podID="943f723e-defa-4cda-914e-964cdf480831" containerID="394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6" exitCode=0 Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.480658 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" event={"ID":"943f723e-defa-4cda-914e-964cdf480831","Type":"ContainerDied","Data":"394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6"} Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.481387 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" event={"ID":"943f723e-defa-4cda-914e-964cdf480831","Type":"ContainerDied","Data":"d3b1630101051fb79d1add750ac7cc08779f2fc4d3d51ed0158d8212f76727e4"} Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.481437 5118 scope.go:117] "RemoveContainer" containerID="394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.481745 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-8htc9" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.483420 5118 generic.go:358] "Generic (PLEG): container finished" podID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" containerID="ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a" exitCode=0 Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.483594 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7px9v" event={"ID":"6d799616-15c0-4e4f-8cbb-5f33d9f607ef","Type":"ContainerDied","Data":"ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a"} Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.483625 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7px9v" event={"ID":"6d799616-15c0-4e4f-8cbb-5f33d9f607ef","Type":"ContainerDied","Data":"cba87a5bd1ba6619a772d7ab1824302c196b62b48ebbda440689c9861ed90f87"} Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.483834 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7px9v" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.491147 5118 generic.go:358] "Generic (PLEG): container finished" podID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" containerID="9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109" exitCode=0 Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.491239 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs27m" event={"ID":"9801ce4f-e9bf-4c09-a624-81675bbda6fa","Type":"ContainerDied","Data":"9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109"} Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.491264 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs27m" event={"ID":"9801ce4f-e9bf-4c09-a624-81675bbda6fa","Type":"ContainerDied","Data":"bfcc1263f82200d98ec2e9b37f84cf8657adb645fe1317e0c7485ba5e0baab9e"} Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.491398 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cs27m" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.494463 5118 generic.go:358] "Generic (PLEG): container finished" podID="14b81eee-396d-4e4e-a48c-87183aa677a0" containerID="348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4" exitCode=0 Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.494593 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mlt4z" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.494617 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlt4z" event={"ID":"14b81eee-396d-4e4e-a48c-87183aa677a0","Type":"ContainerDied","Data":"348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4"} Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.494639 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlt4z" event={"ID":"14b81eee-396d-4e4e-a48c-87183aa677a0","Type":"ContainerDied","Data":"b1b9f756116f7471aa1f88875d48f24e4842021641f718a0d0cc5a66845596b0"} Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.499573 5118 scope.go:117] "RemoveContainer" containerID="394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.499842 5118 generic.go:358] "Generic (PLEG): container finished" podID="70414740-2872-4ebd-b3b5-ded149c0f019" containerID="956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e" exitCode=0 Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.499950 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rpxq" event={"ID":"70414740-2872-4ebd-b3b5-ded149c0f019","Type":"ContainerDied","Data":"956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e"} Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.499993 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rpxq" event={"ID":"70414740-2872-4ebd-b3b5-ded149c0f019","Type":"ContainerDied","Data":"81cd30d77282e220f1aafdf889c356d3f8bb59a0a553eb344ee7f6809cc7ae25"} Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.500078 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rpxq" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.500378 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6\": container with ID starting with 394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6 not found: ID does not exist" containerID="394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.500420 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6"} err="failed to get container status \"394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6\": rpc error: code = NotFound desc = could not find container \"394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6\": container with ID starting with 394573cbf46d7ceae941f13bea721343cdfbedf5b15ddddcc4c68e0017a9dcf6 not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.500446 5118 scope.go:117] "RemoveContainer" containerID="ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.509327 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9801ce4f-e9bf-4c09-a624-81675bbda6fa" (UID: "9801ce4f-e9bf-4c09-a624-81675bbda6fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.518871 5118 scope.go:117] "RemoveContainer" containerID="8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.528339 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8htc9"] Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.530483 5118 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/943f723e-defa-4cda-914e-964cdf480831-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.530501 5118 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/943f723e-defa-4cda-914e-964cdf480831-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.530514 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dp46w\" (UniqueName: \"kubernetes.io/projected/9801ce4f-e9bf-4c09-a624-81675bbda6fa-kube-api-access-dp46w\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.530525 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.530537 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bbrl8\" (UniqueName: \"kubernetes.io/projected/70414740-2872-4ebd-b3b5-ded149c0f019-kube-api-access-bbrl8\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc 
kubenswrapper[5118]: I1208 19:33:50.530550 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.530561 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p894h\" (UniqueName: \"kubernetes.io/projected/943f723e-defa-4cda-914e-964cdf480831-kube-api-access-p894h\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.530572 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9801ce4f-e9bf-4c09-a624-81675bbda6fa-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.530583 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gjmkp\" (UniqueName: \"kubernetes.io/projected/14b81eee-396d-4e4e-a48c-87183aa677a0-kube-api-access-gjmkp\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.530595 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70414740-2872-4ebd-b3b5-ded149c0f019-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.531547 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8htc9"] Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.539265 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rpxq"] Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.546717 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rpxq"] Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.556826 5118 scope.go:117] "RemoveContainer" containerID="bed79bb93d24e2d1a555ecf210feee48f52228ceb9ef28fb1551e0680bb48339" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.556939 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7px9v"] Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.561454 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7px9v"] Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.571205 5118 scope.go:117] "RemoveContainer" containerID="ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.571606 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a\": container with ID starting with ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a not found: ID does not exist" containerID="ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.571637 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a"} err="failed to get container status \"ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a\": rpc error: code = NotFound desc = could not find container \"ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a\": container with ID starting with 
ef174d642a498f3fd4f351f5d82e46cf8e375768e0ee1465453b73b288f6514a not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.571659 5118 scope.go:117] "RemoveContainer" containerID="8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.571931 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9\": container with ID starting with 8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9 not found: ID does not exist" containerID="8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.571964 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9"} err="failed to get container status \"8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9\": rpc error: code = NotFound desc = could not find container \"8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9\": container with ID starting with 8ad5bd2c6eb14bd89234308b4521e2fa30f9b9aabd234f54bb9f3133827e90e9 not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.571979 5118 scope.go:117] "RemoveContainer" containerID="bed79bb93d24e2d1a555ecf210feee48f52228ceb9ef28fb1551e0680bb48339" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.572235 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed79bb93d24e2d1a555ecf210feee48f52228ceb9ef28fb1551e0680bb48339\": container with ID starting with bed79bb93d24e2d1a555ecf210feee48f52228ceb9ef28fb1551e0680bb48339 not found: ID does not exist" containerID="bed79bb93d24e2d1a555ecf210feee48f52228ceb9ef28fb1551e0680bb48339" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.572259 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed79bb93d24e2d1a555ecf210feee48f52228ceb9ef28fb1551e0680bb48339"} err="failed to get container status \"bed79bb93d24e2d1a555ecf210feee48f52228ceb9ef28fb1551e0680bb48339\": rpc error: code = NotFound desc = could not find container \"bed79bb93d24e2d1a555ecf210feee48f52228ceb9ef28fb1551e0680bb48339\": container with ID starting with bed79bb93d24e2d1a555ecf210feee48f52228ceb9ef28fb1551e0680bb48339 not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.572272 5118 scope.go:117] "RemoveContainer" containerID="9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.575460 5118 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod943f723e_defa_4cda_914e_964cdf480831.slice/crio-d3b1630101051fb79d1add750ac7cc08779f2fc4d3d51ed0158d8212f76727e4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod943f723e_defa_4cda_914e_964cdf480831.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70414740_2872_4ebd_b3b5_ded149c0f019.slice/crio-81cd30d77282e220f1aafdf889c356d3f8bb59a0a553eb344ee7f6809cc7ae25\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d799616_15c0_4e4f_8cbb_5f33d9f607ef.slice\": RecentStats: unable to find data in memory cache]" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.578342 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14b81eee-396d-4e4e-a48c-87183aa677a0" (UID: "14b81eee-396d-4e4e-a48c-87183aa677a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.584568 5118 scope.go:117] "RemoveContainer" containerID="d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.597818 5118 scope.go:117] "RemoveContainer" containerID="40a6430fc96c6a250f5e043f76d146894da29d7d7d13546c7c964b8359716a34" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.612750 5118 scope.go:117] "RemoveContainer" containerID="9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.613108 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109\": container with ID starting with 9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109 not found: ID does not exist" containerID="9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.613147 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109"} err="failed to get container status \"9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109\": rpc error: code = NotFound desc = could not find container \"9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109\": container with ID starting with 9dd6827ac05bd43f580f9f92a65b548d17fdbdf9b71940625fa81d5bfe12f109 not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.613168 5118 scope.go:117] "RemoveContainer" containerID="d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.613420 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5\": container with ID starting with d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5 not found: ID does not exist" containerID="d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.613448 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5"} err="failed to get container status \"d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5\": rpc error: code = NotFound desc = could not find container \"d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5\": container with ID starting with d39b494440ddad6c0c6d5b35616617c3710515bf3432c08a8565dafa38546bf5 not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.613464 5118 scope.go:117] "RemoveContainer" 
containerID="40a6430fc96c6a250f5e043f76d146894da29d7d7d13546c7c964b8359716a34" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.613769 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a6430fc96c6a250f5e043f76d146894da29d7d7d13546c7c964b8359716a34\": container with ID starting with 40a6430fc96c6a250f5e043f76d146894da29d7d7d13546c7c964b8359716a34 not found: ID does not exist" containerID="40a6430fc96c6a250f5e043f76d146894da29d7d7d13546c7c964b8359716a34" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.613803 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a6430fc96c6a250f5e043f76d146894da29d7d7d13546c7c964b8359716a34"} err="failed to get container status \"40a6430fc96c6a250f5e043f76d146894da29d7d7d13546c7c964b8359716a34\": rpc error: code = NotFound desc = could not find container \"40a6430fc96c6a250f5e043f76d146894da29d7d7d13546c7c964b8359716a34\": container with ID starting with 40a6430fc96c6a250f5e043f76d146894da29d7d7d13546c7c964b8359716a34 not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.613841 5118 scope.go:117] "RemoveContainer" containerID="348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.627950 5118 scope.go:117] "RemoveContainer" containerID="38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.632027 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b81eee-396d-4e4e-a48c-87183aa677a0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.648096 5118 scope.go:117] "RemoveContainer" containerID="f8f1dd839ba4fafcbbf531ee7bb3452522db6ae9b64c52f4afc318bacd83c4fa" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.670753 5118 scope.go:117] "RemoveContainer" containerID="348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.671274 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4\": container with ID starting with 348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4 not found: ID does not exist" containerID="348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.671310 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4"} err="failed to get container status \"348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4\": rpc error: code = NotFound desc = could not find container \"348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4\": container with ID starting with 348a2fc5fa3c4684add4d9aabae8590595a98431a23aec1ce43541b26493abd4 not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.671339 5118 scope.go:117] "RemoveContainer" containerID="38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.671977 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee\": container with ID starting with 38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee not found: ID does not exist" containerID="38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.672011 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee"} err="failed to get container status \"38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee\": rpc error: code = NotFound desc = could not find container \"38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee\": container with ID starting with 38b61a0262fa7fc94b844881e09c85dddbd382f5c3d1f341b31d4ef0abdf1cee not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.672036 5118 scope.go:117] "RemoveContainer" containerID="f8f1dd839ba4fafcbbf531ee7bb3452522db6ae9b64c52f4afc318bacd83c4fa" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.672404 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8f1dd839ba4fafcbbf531ee7bb3452522db6ae9b64c52f4afc318bacd83c4fa\": container with ID starting with f8f1dd839ba4fafcbbf531ee7bb3452522db6ae9b64c52f4afc318bacd83c4fa not found: ID does not exist" containerID="f8f1dd839ba4fafcbbf531ee7bb3452522db6ae9b64c52f4afc318bacd83c4fa" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.672456 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f1dd839ba4fafcbbf531ee7bb3452522db6ae9b64c52f4afc318bacd83c4fa"} err="failed to get container status \"f8f1dd839ba4fafcbbf531ee7bb3452522db6ae9b64c52f4afc318bacd83c4fa\": rpc error: code = NotFound desc = could not find container \"f8f1dd839ba4fafcbbf531ee7bb3452522db6ae9b64c52f4afc318bacd83c4fa\": container with ID starting with f8f1dd839ba4fafcbbf531ee7bb3452522db6ae9b64c52f4afc318bacd83c4fa not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.672487 5118 scope.go:117] "RemoveContainer" containerID="956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.690040 5118 scope.go:117] "RemoveContainer" containerID="629a6e05d4deaf42e44711007fa5fdb2583b87e8cb1fcd8ca3e8156ed75df6be" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.705766 5118 scope.go:117] "RemoveContainer" containerID="898620bd70fd3a06031511716724a97b64124bf95677abd2479ecec31949844f" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.717906 5118 scope.go:117] "RemoveContainer" containerID="956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.718589 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e\": container with ID starting with 956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e not found: ID does not exist" containerID="956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.718642 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e"} err="failed to get container status 
\"956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e\": rpc error: code = NotFound desc = could not find container \"956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e\": container with ID starting with 956962fd5a16ec4a7380033ef397e59d0176c52c75a9967ee9e5af106effc88e not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.718681 5118 scope.go:117] "RemoveContainer" containerID="629a6e05d4deaf42e44711007fa5fdb2583b87e8cb1fcd8ca3e8156ed75df6be" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.719203 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629a6e05d4deaf42e44711007fa5fdb2583b87e8cb1fcd8ca3e8156ed75df6be\": container with ID starting with 629a6e05d4deaf42e44711007fa5fdb2583b87e8cb1fcd8ca3e8156ed75df6be not found: ID does not exist" containerID="629a6e05d4deaf42e44711007fa5fdb2583b87e8cb1fcd8ca3e8156ed75df6be" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.719225 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629a6e05d4deaf42e44711007fa5fdb2583b87e8cb1fcd8ca3e8156ed75df6be"} err="failed to get container status \"629a6e05d4deaf42e44711007fa5fdb2583b87e8cb1fcd8ca3e8156ed75df6be\": rpc error: code = NotFound desc = could not find container \"629a6e05d4deaf42e44711007fa5fdb2583b87e8cb1fcd8ca3e8156ed75df6be\": container with ID starting with 629a6e05d4deaf42e44711007fa5fdb2583b87e8cb1fcd8ca3e8156ed75df6be not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.719240 5118 scope.go:117] "RemoveContainer" containerID="898620bd70fd3a06031511716724a97b64124bf95677abd2479ecec31949844f" Dec 08 19:33:50 crc kubenswrapper[5118]: E1208 19:33:50.719499 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"898620bd70fd3a06031511716724a97b64124bf95677abd2479ecec31949844f\": container with ID starting with 898620bd70fd3a06031511716724a97b64124bf95677abd2479ecec31949844f not found: ID does not exist" containerID="898620bd70fd3a06031511716724a97b64124bf95677abd2479ecec31949844f" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.719547 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898620bd70fd3a06031511716724a97b64124bf95677abd2479ecec31949844f"} err="failed to get container status \"898620bd70fd3a06031511716724a97b64124bf95677abd2479ecec31949844f\": rpc error: code = NotFound desc = could not find container \"898620bd70fd3a06031511716724a97b64124bf95677abd2479ecec31949844f\": container with ID starting with 898620bd70fd3a06031511716724a97b64124bf95677abd2479ecec31949844f not found: ID does not exist" Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.821508 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cs27m"] Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.824412 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cs27m"] Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.831170 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mlt4z"] Dec 08 19:33:50 crc kubenswrapper[5118]: I1208 19:33:50.835490 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mlt4z"] Dec 08 19:33:52 crc kubenswrapper[5118]: I1208 19:33:52.105338 5118 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b81eee-396d-4e4e-a48c-87183aa677a0" path="/var/lib/kubelet/pods/14b81eee-396d-4e4e-a48c-87183aa677a0/volumes" Dec 08 19:33:52 crc kubenswrapper[5118]: I1208 19:33:52.107821 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" path="/var/lib/kubelet/pods/6d799616-15c0-4e4f-8cbb-5f33d9f607ef/volumes" Dec 08 19:33:52 crc kubenswrapper[5118]: I1208 19:33:52.109003 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70414740-2872-4ebd-b3b5-ded149c0f019" path="/var/lib/kubelet/pods/70414740-2872-4ebd-b3b5-ded149c0f019/volumes" Dec 08 19:33:52 crc kubenswrapper[5118]: I1208 19:33:52.111018 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="943f723e-defa-4cda-914e-964cdf480831" path="/var/lib/kubelet/pods/943f723e-defa-4cda-914e-964cdf480831/volumes" Dec 08 19:33:52 crc kubenswrapper[5118]: I1208 19:33:52.111986 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" path="/var/lib/kubelet/pods/9801ce4f-e9bf-4c09-a624-81675bbda6fa/volumes" Dec 08 19:33:54 crc kubenswrapper[5118]: I1208 19:33:54.531155 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 19:33:54 crc kubenswrapper[5118]: I1208 19:33:54.533250 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:33:54 crc kubenswrapper[5118]: I1208 19:33:54.533293 5118 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="6e594ca61541bca3ffd1b0d815825b9e1c94a6070d071e87381dae2c1664e59b" exitCode=137 Dec 08 19:33:54 crc kubenswrapper[5118]: I1208 19:33:54.533518 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"6e594ca61541bca3ffd1b0d815825b9e1c94a6070d071e87381dae2c1664e59b"} Dec 08 19:33:54 crc kubenswrapper[5118]: I1208 19:33:54.533581 5118 scope.go:117] "RemoveContainer" containerID="ed5e8a16f16345b28c7907efe04e4b3856cbade55bdb538fc7f3790a7e71d583" Dec 08 19:33:55 crc kubenswrapper[5118]: I1208 19:33:55.539626 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 19:33:55 crc kubenswrapper[5118]: I1208 19:33:55.541169 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5783b1a6814882fc4952d55f42c5627ac977df8b4ca6161d4643d392b8de60c2"} Dec 08 19:33:58 crc kubenswrapper[5118]: I1208 19:33:58.873089 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:33:59 crc kubenswrapper[5118]: I1208 19:33:59.013244 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50126: no serving certificate available for the kubelet" Dec 08 19:34:04 crc kubenswrapper[5118]: I1208 19:34:04.008359 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:34:04 crc kubenswrapper[5118]: I1208 19:34:04.014341 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:34:04 crc kubenswrapper[5118]: I1208 19:34:04.601007 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:34:08 crc kubenswrapper[5118]: I1208 19:34:08.238261 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 19:34:08 crc kubenswrapper[5118]: I1208 19:34:08.239114 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 19:34:09 crc kubenswrapper[5118]: I1208 19:34:09.467642 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:34:09 crc kubenswrapper[5118]: I1208 19:34:09.467736 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:34:09 crc kubenswrapper[5118]: I1208 19:34:09.467799 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:34:09 crc kubenswrapper[5118]: I1208 19:34:09.468259 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92647ff13fb1d82844fdc1c78fadbe5a9f72de51c235d82acb429790753aa73b"} pod="openshift-machine-config-operator/machine-config-daemon-twnt9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:34:09 crc kubenswrapper[5118]: I1208 19:34:09.468309 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" containerID="cri-o://92647ff13fb1d82844fdc1c78fadbe5a9f72de51c235d82acb429790753aa73b" gracePeriod=600 Dec 08 19:34:09 crc kubenswrapper[5118]: I1208 19:34:09.609124 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:34:09 crc kubenswrapper[5118]: I1208 19:34:09.621572 5118 generic.go:358] "Generic (PLEG): container finished" podID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerID="92647ff13fb1d82844fdc1c78fadbe5a9f72de51c235d82acb429790753aa73b" exitCode=0 Dec 08 19:34:09 crc kubenswrapper[5118]: I1208 19:34:09.621612 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerDied","Data":"92647ff13fb1d82844fdc1c78fadbe5a9f72de51c235d82acb429790753aa73b"} Dec 08 19:34:10 
crc kubenswrapper[5118]: I1208 19:34:10.634143 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerStarted","Data":"9a83000e5f1454b2084ae52cb60bef4a8eb4e2dca054391d550af658c8371fed"} Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.314315 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kgtcw"] Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315532 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="943f723e-defa-4cda-914e-964cdf480831" containerName="marketplace-operator" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315550 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="943f723e-defa-4cda-914e-964cdf480831" containerName="marketplace-operator" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315572 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70414740-2872-4ebd-b3b5-ded149c0f019" containerName="extract-utilities" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315580 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="70414740-2872-4ebd-b3b5-ded149c0f019" containerName="extract-utilities" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315591 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14b81eee-396d-4e4e-a48c-87183aa677a0" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315599 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b81eee-396d-4e4e-a48c-87183aa677a0" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315611 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" containerName="extract-utilities" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315618 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" containerName="extract-utilities" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315632 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" containerName="extract-content" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315639 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" containerName="extract-content" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315651 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" containerName="extract-content" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315658 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" containerName="extract-content" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315671 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70414740-2872-4ebd-b3b5-ded149c0f019" containerName="extract-content" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315678 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="70414740-2872-4ebd-b3b5-ded149c0f019" containerName="extract-content" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315702 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315710 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315720 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14b81eee-396d-4e4e-a48c-87183aa677a0" containerName="extract-utilities" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315727 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b81eee-396d-4e4e-a48c-87183aa677a0" containerName="extract-utilities" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315739 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70414740-2872-4ebd-b3b5-ded149c0f019" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315774 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="70414740-2872-4ebd-b3b5-ded149c0f019" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315786 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" containerName="extract-utilities" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315794 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" containerName="extract-utilities" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315804 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" containerName="installer" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315810 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" containerName="installer" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315819 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315826 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315860 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14b81eee-396d-4e4e-a48c-87183aa677a0" containerName="extract-content" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315868 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b81eee-396d-4e4e-a48c-87183aa677a0" containerName="extract-content" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315876 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315883 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.315988 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="70414740-2872-4ebd-b3b5-ded149c0f019" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.316000 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 19:34:15 
crc kubenswrapper[5118]: I1208 19:34:15.316012 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="14b81eee-396d-4e4e-a48c-87183aa677a0" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.316022 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="6d799616-15c0-4e4f-8cbb-5f33d9f607ef" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.316032 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="9801ce4f-e9bf-4c09-a624-81675bbda6fa" containerName="registry-server" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.316042 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="9b7dde72-b320-47ca-af99-98eee388ad8d" containerName="installer" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.316055 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="943f723e-defa-4cda-914e-964cdf480831" containerName="marketplace-operator" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.608037 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kgtcw"] Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.608074 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-kk4vd"] Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.608089 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv"] Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.608361 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" podUID="d5ad6856-ba98-4f91-b102-7e41020e2ecf" containerName="route-controller-manager" containerID="cri-o://c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f" gracePeriod=30 Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.608401 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.608665 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" podUID="88131373-e414-436f-83e1-9d4aa4b55f62" containerName="controller-manager" containerID="cri-o://814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2" gracePeriod=30 Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.611068 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.611767 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.613974 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.614179 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.625145 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.746451 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a8ec897-09aa-4885-9bc7-142379bb368e-tmp\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: \"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.746527 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtbxm\" (UniqueName: \"kubernetes.io/projected/7a8ec897-09aa-4885-9bc7-142379bb368e-kube-api-access-wtbxm\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: \"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.746629 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7a8ec897-09aa-4885-9bc7-142379bb368e-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: \"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.746675 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a8ec897-09aa-4885-9bc7-142379bb368e-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: \"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.847924 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a8ec897-09aa-4885-9bc7-142379bb368e-tmp\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: 
\"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.847962 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wtbxm\" (UniqueName: \"kubernetes.io/projected/7a8ec897-09aa-4885-9bc7-142379bb368e-kube-api-access-wtbxm\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: \"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.848020 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7a8ec897-09aa-4885-9bc7-142379bb368e-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: \"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.848051 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a8ec897-09aa-4885-9bc7-142379bb368e-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: \"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.848485 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a8ec897-09aa-4885-9bc7-142379bb368e-tmp\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: \"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.849294 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a8ec897-09aa-4885-9bc7-142379bb368e-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: \"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.861071 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7a8ec897-09aa-4885-9bc7-142379bb368e-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: \"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.869794 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtbxm\" (UniqueName: \"kubernetes.io/projected/7a8ec897-09aa-4885-9bc7-142379bb368e-kube-api-access-wtbxm\") pod \"marketplace-operator-547dbd544d-kgtcw\" (UID: \"7a8ec897-09aa-4885-9bc7-142379bb368e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.932506 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:15 crc kubenswrapper[5118]: I1208 19:34:15.989164 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.003778 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.025763 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv"] Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.028200 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5ad6856-ba98-4f91-b102-7e41020e2ecf" containerName="route-controller-manager" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.028224 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ad6856-ba98-4f91-b102-7e41020e2ecf" containerName="route-controller-manager" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.028233 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="88131373-e414-436f-83e1-9d4aa4b55f62" containerName="controller-manager" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.028972 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="88131373-e414-436f-83e1-9d4aa4b55f62" containerName="controller-manager" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.029088 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="d5ad6856-ba98-4f91-b102-7e41020e2ecf" containerName="route-controller-manager" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.029102 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="88131373-e414-436f-83e1-9d4aa4b55f62" containerName="controller-manager" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.044876 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.052220 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv"] Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.055458 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f88f547c4-fwlzv"] Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.060426 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.063079 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f88f547c4-fwlzv"] Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153386 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-config\") pod \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153420 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-proxy-ca-bundles\") pod \"88131373-e414-436f-83e1-9d4aa4b55f62\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153511 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88131373-e414-436f-83e1-9d4aa4b55f62-serving-cert\") pod \"88131373-e414-436f-83e1-9d4aa4b55f62\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153580 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-client-ca\") pod \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153616 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ad6856-ba98-4f91-b102-7e41020e2ecf-serving-cert\") pod \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153656 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88131373-e414-436f-83e1-9d4aa4b55f62-tmp\") pod \"88131373-e414-436f-83e1-9d4aa4b55f62\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153670 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d5ad6856-ba98-4f91-b102-7e41020e2ecf-tmp\") pod \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153709 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-config\") pod \"88131373-e414-436f-83e1-9d4aa4b55f62\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153729 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-client-ca\") pod \"88131373-e414-436f-83e1-9d4aa4b55f62\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153754 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78g6q\" (UniqueName: 
\"kubernetes.io/projected/d5ad6856-ba98-4f91-b102-7e41020e2ecf-kube-api-access-78g6q\") pod \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\" (UID: \"d5ad6856-ba98-4f91-b102-7e41020e2ecf\") " Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153774 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgz5w\" (UniqueName: \"kubernetes.io/projected/88131373-e414-436f-83e1-9d4aa4b55f62-kube-api-access-qgz5w\") pod \"88131373-e414-436f-83e1-9d4aa4b55f62\" (UID: \"88131373-e414-436f-83e1-9d4aa4b55f62\") " Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153894 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-config\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153910 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-client-ca\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153936 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4w9q\" (UniqueName: \"kubernetes.io/projected/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-kube-api-access-f4w9q\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153969 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-serving-cert\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.153992 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-tmp\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.154313 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-config" (OuterVolumeSpecName: "config") pod "d5ad6856-ba98-4f91-b102-7e41020e2ecf" (UID: "d5ad6856-ba98-4f91-b102-7e41020e2ecf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.154802 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-client-ca" (OuterVolumeSpecName: "client-ca") pod "88131373-e414-436f-83e1-9d4aa4b55f62" (UID: "88131373-e414-436f-83e1-9d4aa4b55f62"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.154833 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "88131373-e414-436f-83e1-9d4aa4b55f62" (UID: "88131373-e414-436f-83e1-9d4aa4b55f62"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.155565 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88131373-e414-436f-83e1-9d4aa4b55f62-tmp" (OuterVolumeSpecName: "tmp") pod "88131373-e414-436f-83e1-9d4aa4b55f62" (UID: "88131373-e414-436f-83e1-9d4aa4b55f62"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.155934 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-client-ca" (OuterVolumeSpecName: "client-ca") pod "d5ad6856-ba98-4f91-b102-7e41020e2ecf" (UID: "d5ad6856-ba98-4f91-b102-7e41020e2ecf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.156884 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ad6856-ba98-4f91-b102-7e41020e2ecf-tmp" (OuterVolumeSpecName: "tmp") pod "d5ad6856-ba98-4f91-b102-7e41020e2ecf" (UID: "d5ad6856-ba98-4f91-b102-7e41020e2ecf"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.157000 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-config" (OuterVolumeSpecName: "config") pod "88131373-e414-436f-83e1-9d4aa4b55f62" (UID: "88131373-e414-436f-83e1-9d4aa4b55f62"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.158726 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kgtcw"] Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.159711 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88131373-e414-436f-83e1-9d4aa4b55f62-kube-api-access-qgz5w" (OuterVolumeSpecName: "kube-api-access-qgz5w") pod "88131373-e414-436f-83e1-9d4aa4b55f62" (UID: "88131373-e414-436f-83e1-9d4aa4b55f62"). InnerVolumeSpecName "kube-api-access-qgz5w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.159882 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ad6856-ba98-4f91-b102-7e41020e2ecf-kube-api-access-78g6q" (OuterVolumeSpecName: "kube-api-access-78g6q") pod "d5ad6856-ba98-4f91-b102-7e41020e2ecf" (UID: "d5ad6856-ba98-4f91-b102-7e41020e2ecf"). InnerVolumeSpecName "kube-api-access-78g6q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.159744 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88131373-e414-436f-83e1-9d4aa4b55f62-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88131373-e414-436f-83e1-9d4aa4b55f62" (UID: "88131373-e414-436f-83e1-9d4aa4b55f62"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.162986 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ad6856-ba98-4f91-b102-7e41020e2ecf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d5ad6856-ba98-4f91-b102-7e41020e2ecf" (UID: "d5ad6856-ba98-4f91-b102-7e41020e2ecf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.254647 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-serving-cert\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255076 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-tmp\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255114 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-client-ca\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255157 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-config\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255208 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a054f261-c62f-4b27-918f-ea4ff6432d66-tmp\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255271 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-proxy-ca-bundles\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255287 5118 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9jfl\" (UniqueName: \"kubernetes.io/projected/a054f261-c62f-4b27-918f-ea4ff6432d66-kube-api-access-p9jfl\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255312 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a054f261-c62f-4b27-918f-ea4ff6432d66-serving-cert\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255333 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-config\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255349 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-client-ca\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255375 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4w9q\" (UniqueName: \"kubernetes.io/projected/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-kube-api-access-f4w9q\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255422 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88131373-e414-436f-83e1-9d4aa4b55f62-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255434 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255443 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ad6856-ba98-4f91-b102-7e41020e2ecf-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255452 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88131373-e414-436f-83e1-9d4aa4b55f62-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255460 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d5ad6856-ba98-4f91-b102-7e41020e2ecf-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255467 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255475 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255483 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-78g6q\" (UniqueName: \"kubernetes.io/projected/d5ad6856-ba98-4f91-b102-7e41020e2ecf-kube-api-access-78g6q\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255491 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgz5w\" (UniqueName: \"kubernetes.io/projected/88131373-e414-436f-83e1-9d4aa4b55f62-kube-api-access-qgz5w\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255501 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ad6856-ba98-4f91-b102-7e41020e2ecf-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.255510 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88131373-e414-436f-83e1-9d4aa4b55f62-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.256151 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-tmp\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.256585 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-client-ca\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.256831 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-config\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.264946 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-serving-cert\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.272411 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4w9q\" (UniqueName: \"kubernetes.io/projected/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-kube-api-access-f4w9q\") pod \"route-controller-manager-6785fbd6d-xzvhv\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " 
pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.356868 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-proxy-ca-bundles\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.356913 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p9jfl\" (UniqueName: \"kubernetes.io/projected/a054f261-c62f-4b27-918f-ea4ff6432d66-kube-api-access-p9jfl\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.356931 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a054f261-c62f-4b27-918f-ea4ff6432d66-serving-cert\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.357005 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-client-ca\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.357050 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-config\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.357096 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a054f261-c62f-4b27-918f-ea4ff6432d66-tmp\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.358084 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-proxy-ca-bundles\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.358345 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-config\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.358885 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-client-ca\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.359165 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a054f261-c62f-4b27-918f-ea4ff6432d66-tmp\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.362445 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a054f261-c62f-4b27-918f-ea4ff6432d66-serving-cert\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.376652 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9jfl\" (UniqueName: \"kubernetes.io/projected/a054f261-c62f-4b27-918f-ea4ff6432d66-kube-api-access-p9jfl\") pod \"controller-manager-5f88f547c4-fwlzv\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") " pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.380888 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.389903 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.559732 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv"] Dec 08 19:34:16 crc kubenswrapper[5118]: W1208 19:34:16.569313 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6a95f5e_c69d_47bf_912b_2c0e1ef1bfe6.slice/crio-4ec350facc54e4109d6204e1c7982be091e8713fff17868cd1dc45d820eb8184 WatchSource:0}: Error finding container 4ec350facc54e4109d6204e1c7982be091e8713fff17868cd1dc45d820eb8184: Status 404 returned error can't find the container with id 4ec350facc54e4109d6204e1c7982be091e8713fff17868cd1dc45d820eb8184 Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.597612 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f88f547c4-fwlzv"] Dec 08 19:34:16 crc kubenswrapper[5118]: W1208 19:34:16.602473 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda054f261_c62f_4b27_918f_ea4ff6432d66.slice/crio-307807f572826dfd3060753311e7d2798eb0317be15891eb1e3757cfde59f1dc WatchSource:0}: Error finding container 307807f572826dfd3060753311e7d2798eb0317be15891eb1e3757cfde59f1dc: Status 404 returned error can't find the container with id 307807f572826dfd3060753311e7d2798eb0317be15891eb1e3757cfde59f1dc Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.672725 5118 generic.go:358] "Generic (PLEG): container finished" podID="d5ad6856-ba98-4f91-b102-7e41020e2ecf" containerID="c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f" exitCode=0 Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.672869 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.673127 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" event={"ID":"d5ad6856-ba98-4f91-b102-7e41020e2ecf","Type":"ContainerDied","Data":"c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f"} Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.673237 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv" event={"ID":"d5ad6856-ba98-4f91-b102-7e41020e2ecf","Type":"ContainerDied","Data":"5c6166455162962e418f51aacf38cd16ec252eb4b0379d6c660c0fbb98d44618"} Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.673279 5118 scope.go:117] "RemoveContainer" containerID="c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.674917 5118 generic.go:358] "Generic (PLEG): container finished" podID="88131373-e414-436f-83e1-9d4aa4b55f62" containerID="814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2" exitCode=0 Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.675014 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" event={"ID":"88131373-e414-436f-83e1-9d4aa4b55f62","Type":"ContainerDied","Data":"814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2"} Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.675051 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" event={"ID":"88131373-e414-436f-83e1-9d4aa4b55f62","Type":"ContainerDied","Data":"e9c3dd7f772d0be447fca63666f164bc76da266d44dcae52e31e059a50659a1a"} Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.675019 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-kk4vd" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.681391 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" event={"ID":"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6","Type":"ContainerStarted","Data":"4ec350facc54e4109d6204e1c7982be091e8713fff17868cd1dc45d820eb8184"} Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.683331 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" event={"ID":"7a8ec897-09aa-4885-9bc7-142379bb368e","Type":"ContainerStarted","Data":"3897fb909793aa67fd1e0101a7bf6c2acfa062a20a43aaab0749517b163ae758"} Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.683355 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" event={"ID":"7a8ec897-09aa-4885-9bc7-142379bb368e","Type":"ContainerStarted","Data":"aecde8a105039c0f1b17f3b8530bc9a52ee740a9dd227fa8aae07d7cd33ef577"} Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.683772 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.686480 5118 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-kgtcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.686525 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" podUID="7a8ec897-09aa-4885-9bc7-142379bb368e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.686834 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" event={"ID":"a054f261-c62f-4b27-918f-ea4ff6432d66","Type":"ContainerStarted","Data":"307807f572826dfd3060753311e7d2798eb0317be15891eb1e3757cfde59f1dc"} Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.698430 5118 scope.go:117] "RemoveContainer" containerID="c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.700074 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" podStartSLOduration=1.7000625139999999 podStartE2EDuration="1.700062514s" podCreationTimestamp="2025-12-08 19:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:34:16.697607117 +0000 UTC m=+308.990452594" watchObservedRunningTime="2025-12-08 19:34:16.700062514 +0000 UTC m=+308.992907971" Dec 08 19:34:16 crc kubenswrapper[5118]: E1208 19:34:16.700262 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f\": container with ID starting with c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f 
not found: ID does not exist" containerID="c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.700377 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f"} err="failed to get container status \"c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f\": rpc error: code = NotFound desc = could not find container \"c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f\": container with ID starting with c4d226709fcb65ffa641d9fc169448e1042e2236994196c7d2a5aa5db041021f not found: ID does not exist" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.700530 5118 scope.go:117] "RemoveContainer" containerID="814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.719411 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-kk4vd"] Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.728246 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-kk4vd"] Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.733090 5118 scope.go:117] "RemoveContainer" containerID="814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2" Dec 08 19:34:16 crc kubenswrapper[5118]: E1208 19:34:16.733951 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2\": container with ID starting with 814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2 not found: ID does not exist" containerID="814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.734081 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2"} err="failed to get container status \"814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2\": rpc error: code = NotFound desc = could not find container \"814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2\": container with ID starting with 814721a261b533a9a86b78e69965b4a15f6541f65a800373fd10ca72e3e5b7d2 not found: ID does not exist" Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.738175 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv"] Dec 08 19:34:16 crc kubenswrapper[5118]: I1208 19:34:16.748449 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-575sv"] Dec 08 19:34:17 crc kubenswrapper[5118]: I1208 19:34:17.697341 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" event={"ID":"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6","Type":"ContainerStarted","Data":"de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b"} Dec 08 19:34:17 crc kubenswrapper[5118]: I1208 19:34:17.698459 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" 
event={"ID":"a054f261-c62f-4b27-918f-ea4ff6432d66","Type":"ContainerStarted","Data":"029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e"} Dec 08 19:34:17 crc kubenswrapper[5118]: I1208 19:34:17.698622 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:17 crc kubenswrapper[5118]: I1208 19:34:17.699506 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:17 crc kubenswrapper[5118]: I1208 19:34:17.702188 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kgtcw" Dec 08 19:34:17 crc kubenswrapper[5118]: I1208 19:34:17.705106 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:17 crc kubenswrapper[5118]: I1208 19:34:17.705835 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:17 crc kubenswrapper[5118]: I1208 19:34:17.726861 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" podStartSLOduration=2.726832505 podStartE2EDuration="2.726832505s" podCreationTimestamp="2025-12-08 19:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:34:17.719536658 +0000 UTC m=+310.012382135" watchObservedRunningTime="2025-12-08 19:34:17.726832505 +0000 UTC m=+310.019677982" Dec 08 19:34:17 crc kubenswrapper[5118]: I1208 19:34:17.778673 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" podStartSLOduration=2.778648885 podStartE2EDuration="2.778648885s" podCreationTimestamp="2025-12-08 19:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:34:17.777065343 +0000 UTC m=+310.069910800" watchObservedRunningTime="2025-12-08 19:34:17.778648885 +0000 UTC m=+310.071494342" Dec 08 19:34:18 crc kubenswrapper[5118]: I1208 19:34:18.104505 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88131373-e414-436f-83e1-9d4aa4b55f62" path="/var/lib/kubelet/pods/88131373-e414-436f-83e1-9d4aa4b55f62/volumes" Dec 08 19:34:18 crc kubenswrapper[5118]: I1208 19:34:18.105106 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ad6856-ba98-4f91-b102-7e41020e2ecf" path="/var/lib/kubelet/pods/d5ad6856-ba98-4f91-b102-7e41020e2ecf/volumes" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.119471 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f88f547c4-fwlzv"] Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.120330 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" podUID="a054f261-c62f-4b27-918f-ea4ff6432d66" containerName="controller-manager" containerID="cri-o://029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e" gracePeriod=30 Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 
19:34:46.135217 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv"] Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.135468 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" podUID="e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6" containerName="route-controller-manager" containerID="cri-o://de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b" gracePeriod=30 Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.598760 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.635325 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-serving-cert\") pod \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.635373 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-client-ca\") pod \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.635425 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-config\") pod \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.635453 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4w9q\" (UniqueName: \"kubernetes.io/projected/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-kube-api-access-f4w9q\") pod \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.635484 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-tmp\") pod \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\" (UID: \"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6\") " Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.636141 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-tmp" (OuterVolumeSpecName: "tmp") pod "e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6" (UID: "e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.651340 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6" (UID: "e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.651421 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-config" (OuterVolumeSpecName: "config") pod "e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6" (UID: "e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.651485 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-client-ca" (OuterVolumeSpecName: "client-ca") pod "e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6" (UID: "e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.655991 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-kube-api-access-f4w9q" (OuterVolumeSpecName: "kube-api-access-f4w9q") pod "e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6" (UID: "e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6"). InnerVolumeSpecName "kube-api-access-f4w9q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.663016 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q"] Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.668177 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6" containerName="route-controller-manager" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.668458 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6" containerName="route-controller-manager" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.668668 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6" containerName="route-controller-manager" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.675426 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.678460 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q"] Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.736808 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/014b2232-17fa-431f-a2d2-8c174d6dacd1-tmp\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.736872 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz7d2\" (UniqueName: \"kubernetes.io/projected/014b2232-17fa-431f-a2d2-8c174d6dacd1-kube-api-access-gz7d2\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.736911 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/014b2232-17fa-431f-a2d2-8c174d6dacd1-serving-cert\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.736949 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/014b2232-17fa-431f-a2d2-8c174d6dacd1-config\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.737117 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/014b2232-17fa-431f-a2d2-8c174d6dacd1-client-ca\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.737354 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.737378 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.737391 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.737402 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-config\") on node \"crc\" 
DevicePath \"\"" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.737412 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f4w9q\" (UniqueName: \"kubernetes.io/projected/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6-kube-api-access-f4w9q\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.838343 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/014b2232-17fa-431f-a2d2-8c174d6dacd1-tmp\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.838434 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gz7d2\" (UniqueName: \"kubernetes.io/projected/014b2232-17fa-431f-a2d2-8c174d6dacd1-kube-api-access-gz7d2\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.838502 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/014b2232-17fa-431f-a2d2-8c174d6dacd1-serving-cert\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.838544 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/014b2232-17fa-431f-a2d2-8c174d6dacd1-config\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.838633 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/014b2232-17fa-431f-a2d2-8c174d6dacd1-client-ca\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.838923 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/014b2232-17fa-431f-a2d2-8c174d6dacd1-tmp\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.839775 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/014b2232-17fa-431f-a2d2-8c174d6dacd1-config\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.840098 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/014b2232-17fa-431f-a2d2-8c174d6dacd1-client-ca\") pod 
\"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.842952 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/014b2232-17fa-431f-a2d2-8c174d6dacd1-serving-cert\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.845828 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.859586 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz7d2\" (UniqueName: \"kubernetes.io/projected/014b2232-17fa-431f-a2d2-8c174d6dacd1-kube-api-access-gz7d2\") pod \"route-controller-manager-b4bd789c6-rfg5q\" (UID: \"014b2232-17fa-431f-a2d2-8c174d6dacd1\") " pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.862605 5118 generic.go:358] "Generic (PLEG): container finished" podID="e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6" containerID="de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b" exitCode=0 Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.862717 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.862739 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" event={"ID":"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6","Type":"ContainerDied","Data":"de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b"} Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.862793 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv" event={"ID":"e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6","Type":"ContainerDied","Data":"4ec350facc54e4109d6204e1c7982be091e8713fff17868cd1dc45d820eb8184"} Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.862816 5118 scope.go:117] "RemoveContainer" containerID="de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b" Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.864717 5118 generic.go:358] "Generic (PLEG): container finished" podID="a054f261-c62f-4b27-918f-ea4ff6432d66" containerID="029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e" exitCode=0 Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.864754 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" event={"ID":"a054f261-c62f-4b27-918f-ea4ff6432d66","Type":"ContainerDied","Data":"029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e"} Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.864784 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv" event={"ID":"a054f261-c62f-4b27-918f-ea4ff6432d66","Type":"ContainerDied","Data":"307807f572826dfd3060753311e7d2798eb0317be15891eb1e3757cfde59f1dc"} 
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.864828 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f88f547c4-fwlzv"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.876545 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d996465fb-cb5bf"]
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.877112 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a054f261-c62f-4b27-918f-ea4ff6432d66" containerName="controller-manager"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.877135 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="a054f261-c62f-4b27-918f-ea4ff6432d66" containerName="controller-manager"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.877221 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="a054f261-c62f-4b27-918f-ea4ff6432d66" containerName="controller-manager"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.882789 5118 scope.go:117] "RemoveContainer" containerID="de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b"
Dec 08 19:34:46 crc kubenswrapper[5118]: E1208 19:34:46.883785 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b\": container with ID starting with de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b not found: ID does not exist" containerID="de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.883817 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b"} err="failed to get container status \"de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b\": rpc error: code = NotFound desc = could not find container \"de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b\": container with ID starting with de3186932b794a0a34d925cd7c29953dbac12c7779822fc1b69d79e3fd24fb5b not found: ID does not exist"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.883839 5118 scope.go:117] "RemoveContainer" containerID="029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.886972 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.891609 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d996465fb-cb5bf"]
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.897176 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv"]
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.900050 5118 scope.go:117] "RemoveContainer" containerID="029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.900723 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6785fbd6d-xzvhv"]
Dec 08 19:34:46 crc kubenswrapper[5118]: E1208 19:34:46.901874 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e\": container with ID starting with 029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e not found: ID does not exist" containerID="029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.901921 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e"} err="failed to get container status \"029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e\": rpc error: code = NotFound desc = could not find container \"029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e\": container with ID starting with 029ff1419a5e1d1fd04562666059ffb84722ef2652f9b86b461d11ca19b46d7e not found: ID does not exist"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.940418 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a054f261-c62f-4b27-918f-ea4ff6432d66-tmp\") pod \"a054f261-c62f-4b27-918f-ea4ff6432d66\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") "
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.940494 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a054f261-c62f-4b27-918f-ea4ff6432d66-serving-cert\") pod \"a054f261-c62f-4b27-918f-ea4ff6432d66\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") "
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.940526 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9jfl\" (UniqueName: \"kubernetes.io/projected/a054f261-c62f-4b27-918f-ea4ff6432d66-kube-api-access-p9jfl\") pod \"a054f261-c62f-4b27-918f-ea4ff6432d66\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") "
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.940660 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-config\") pod \"a054f261-c62f-4b27-918f-ea4ff6432d66\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") "
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.940766 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-client-ca\") pod \"a054f261-c62f-4b27-918f-ea4ff6432d66\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") "
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.940797 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-proxy-ca-bundles\") pod \"a054f261-c62f-4b27-918f-ea4ff6432d66\" (UID: \"a054f261-c62f-4b27-918f-ea4ff6432d66\") "
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.940916 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a84aa89b-ef66-4cc3-bf88-2625791e70a4-config\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.940940 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a84aa89b-ef66-4cc3-bf88-2625791e70a4-client-ca\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.940973 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvmt\" (UniqueName: \"kubernetes.io/projected/a84aa89b-ef66-4cc3-bf88-2625791e70a4-kube-api-access-9mvmt\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.941029 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a84aa89b-ef66-4cc3-bf88-2625791e70a4-serving-cert\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.941070 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a84aa89b-ef66-4cc3-bf88-2625791e70a4-proxy-ca-bundles\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.941096 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a84aa89b-ef66-4cc3-bf88-2625791e70a4-tmp\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf"
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.941463 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-client-ca" (OuterVolumeSpecName: "client-ca") pod "a054f261-c62f-4b27-918f-ea4ff6432d66" (UID: "a054f261-c62f-4b27-918f-ea4ff6432d66"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.941506 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a054f261-c62f-4b27-918f-ea4ff6432d66" (UID: "a054f261-c62f-4b27-918f-ea4ff6432d66"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.941568 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-config" (OuterVolumeSpecName: "config") pod "a054f261-c62f-4b27-918f-ea4ff6432d66" (UID: "a054f261-c62f-4b27-918f-ea4ff6432d66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.943169 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a054f261-c62f-4b27-918f-ea4ff6432d66-tmp" (OuterVolumeSpecName: "tmp") pod "a054f261-c62f-4b27-918f-ea4ff6432d66" (UID: "a054f261-c62f-4b27-918f-ea4ff6432d66"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.944460 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a054f261-c62f-4b27-918f-ea4ff6432d66-kube-api-access-p9jfl" (OuterVolumeSpecName: "kube-api-access-p9jfl") pod "a054f261-c62f-4b27-918f-ea4ff6432d66" (UID: "a054f261-c62f-4b27-918f-ea4ff6432d66"). InnerVolumeSpecName "kube-api-access-p9jfl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.947553 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a054f261-c62f-4b27-918f-ea4ff6432d66-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a054f261-c62f-4b27-918f-ea4ff6432d66" (UID: "a054f261-c62f-4b27-918f-ea4ff6432d66"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:34:46 crc kubenswrapper[5118]: I1208 19:34:46.999560 5118 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.042440 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a84aa89b-ef66-4cc3-bf88-2625791e70a4-serving-cert\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.042780 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a84aa89b-ef66-4cc3-bf88-2625791e70a4-proxy-ca-bundles\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.042920 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a84aa89b-ef66-4cc3-bf88-2625791e70a4-tmp\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.043538 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a84aa89b-ef66-4cc3-bf88-2625791e70a4-tmp\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.043833 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a84aa89b-ef66-4cc3-bf88-2625791e70a4-proxy-ca-bundles\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.045032 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a84aa89b-ef66-4cc3-bf88-2625791e70a4-config\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.043680 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a84aa89b-ef66-4cc3-bf88-2625791e70a4-config\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.045306 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a84aa89b-ef66-4cc3-bf88-2625791e70a4-client-ca\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.046255 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a84aa89b-ef66-4cc3-bf88-2625791e70a4-serving-cert\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.046157 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a84aa89b-ef66-4cc3-bf88-2625791e70a4-client-ca\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.046234 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9mvmt\" (UniqueName: \"kubernetes.io/projected/a84aa89b-ef66-4cc3-bf88-2625791e70a4-kube-api-access-9mvmt\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.046639 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.046775 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a054f261-c62f-4b27-918f-ea4ff6432d66-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.046939 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a054f261-c62f-4b27-918f-ea4ff6432d66-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.047027 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p9jfl\" (UniqueName: \"kubernetes.io/projected/a054f261-c62f-4b27-918f-ea4ff6432d66-kube-api-access-p9jfl\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.047118 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.047199 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a054f261-c62f-4b27-918f-ea4ff6432d66-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.062261 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mvmt\" (UniqueName: \"kubernetes.io/projected/a84aa89b-ef66-4cc3-bf88-2625791e70a4-kube-api-access-9mvmt\") pod \"controller-manager-5d996465fb-cb5bf\" (UID: \"a84aa89b-ef66-4cc3-bf88-2625791e70a4\") " pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.196116 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f88f547c4-fwlzv"] Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.199334 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f88f547c4-fwlzv"] Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.204148 5118 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.380105 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d996465fb-cb5bf"] Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.402874 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q"] Dec 08 19:34:47 crc kubenswrapper[5118]: W1208 19:34:47.404619 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod014b2232_17fa_431f_a2d2_8c174d6dacd1.slice/crio-72ffbfa3c3dbf4d431c735288701fe62fa208cbe3335b883281cb22f7513134f WatchSource:0}: Error finding container 72ffbfa3c3dbf4d431c735288701fe62fa208cbe3335b883281cb22f7513134f: Status 404 returned error can't find the container with id 72ffbfa3c3dbf4d431c735288701fe62fa208cbe3335b883281cb22f7513134f Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.871181 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" event={"ID":"014b2232-17fa-431f-a2d2-8c174d6dacd1","Type":"ContainerStarted","Data":"9c872e2daf743ef009ad51290108dc9c14b310d09b1e2f72e4a3c9e2fbf53200"} Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.871506 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.871519 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" event={"ID":"014b2232-17fa-431f-a2d2-8c174d6dacd1","Type":"ContainerStarted","Data":"72ffbfa3c3dbf4d431c735288701fe62fa208cbe3335b883281cb22f7513134f"} Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.873712 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" event={"ID":"a84aa89b-ef66-4cc3-bf88-2625791e70a4","Type":"ContainerStarted","Data":"41647cde430b73cc4f54c6d9a2d85565142368f45ee77a2a5cc8a472d41cf26f"} Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.873876 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" event={"ID":"a84aa89b-ef66-4cc3-bf88-2625791e70a4","Type":"ContainerStarted","Data":"8f418d2aafb165dd03c14592996f8f065b9ea5a54e34726b6b79a27eafdf10b6"} Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.874286 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.886556 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" podStartSLOduration=1.886532654 podStartE2EDuration="1.886532654s" podCreationTimestamp="2025-12-08 19:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:34:47.884793206 +0000 UTC m=+340.177638683" watchObservedRunningTime="2025-12-08 19:34:47.886532654 +0000 UTC m=+340.179378121" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.904003 5118 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" podStartSLOduration=1.9039822069999999 podStartE2EDuration="1.903982207s" podCreationTimestamp="2025-12-08 19:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:34:47.903900134 +0000 UTC m=+340.196745601" watchObservedRunningTime="2025-12-08 19:34:47.903982207 +0000 UTC m=+340.196827674" Dec 08 19:34:47 crc kubenswrapper[5118]: I1208 19:34:47.937068 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b4bd789c6-rfg5q" Dec 08 19:34:48 crc kubenswrapper[5118]: I1208 19:34:48.106633 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a054f261-c62f-4b27-918f-ea4ff6432d66" path="/var/lib/kubelet/pods/a054f261-c62f-4b27-918f-ea4ff6432d66/volumes" Dec 08 19:34:48 crc kubenswrapper[5118]: I1208 19:34:48.107206 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6" path="/var/lib/kubelet/pods/e6a95f5e-c69d-47bf-912b-2c0e1ef1bfe6/volumes" Dec 08 19:34:48 crc kubenswrapper[5118]: I1208 19:34:48.392248 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d996465fb-cb5bf" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.311956 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nn7wp"] Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.352169 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nn7wp"] Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.352323 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nn7wp" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.354798 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.358357 5118 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.487235 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bcdc\" (UniqueName: \"kubernetes.io/projected/cb359b2d-2e73-4aea-b4fe-4510ff35e056-kube-api-access-5bcdc\") pod \"certified-operators-nn7wp\" (UID: \"cb359b2d-2e73-4aea-b4fe-4510ff35e056\") " pod="openshift-marketplace/certified-operators-nn7wp" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.487767 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb359b2d-2e73-4aea-b4fe-4510ff35e056-catalog-content\") pod \"certified-operators-nn7wp\" (UID: \"cb359b2d-2e73-4aea-b4fe-4510ff35e056\") " pod="openshift-marketplace/certified-operators-nn7wp" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.487909 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb359b2d-2e73-4aea-b4fe-4510ff35e056-utilities\") pod \"certified-operators-nn7wp\" (UID: \"cb359b2d-2e73-4aea-b4fe-4510ff35e056\") " pod="openshift-marketplace/certified-operators-nn7wp" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.500419 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vsd29"] Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.513466 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vsd29" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.516642 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.522512 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsd29"] Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.589007 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb359b2d-2e73-4aea-b4fe-4510ff35e056-utilities\") pod \"certified-operators-nn7wp\" (UID: \"cb359b2d-2e73-4aea-b4fe-4510ff35e056\") " pod="openshift-marketplace/certified-operators-nn7wp" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.589077 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5bcdc\" (UniqueName: \"kubernetes.io/projected/cb359b2d-2e73-4aea-b4fe-4510ff35e056-kube-api-access-5bcdc\") pod \"certified-operators-nn7wp\" (UID: \"cb359b2d-2e73-4aea-b4fe-4510ff35e056\") " pod="openshift-marketplace/certified-operators-nn7wp" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.589101 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb359b2d-2e73-4aea-b4fe-4510ff35e056-catalog-content\") pod \"certified-operators-nn7wp\" (UID: \"cb359b2d-2e73-4aea-b4fe-4510ff35e056\") " pod="openshift-marketplace/certified-operators-nn7wp" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.589674 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb359b2d-2e73-4aea-b4fe-4510ff35e056-catalog-content\") pod \"certified-operators-nn7wp\" (UID: \"cb359b2d-2e73-4aea-b4fe-4510ff35e056\") " pod="openshift-marketplace/certified-operators-nn7wp" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.589717 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb359b2d-2e73-4aea-b4fe-4510ff35e056-utilities\") pod \"certified-operators-nn7wp\" (UID: \"cb359b2d-2e73-4aea-b4fe-4510ff35e056\") " pod="openshift-marketplace/certified-operators-nn7wp" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.614802 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bcdc\" (UniqueName: \"kubernetes.io/projected/cb359b2d-2e73-4aea-b4fe-4510ff35e056-kube-api-access-5bcdc\") pod \"certified-operators-nn7wp\" (UID: \"cb359b2d-2e73-4aea-b4fe-4510ff35e056\") " pod="openshift-marketplace/certified-operators-nn7wp" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.668268 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nn7wp" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.690522 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbdmw\" (UniqueName: \"kubernetes.io/projected/f8b6cfe8-0ceb-4d66-945e-5eb95641a779-kube-api-access-kbdmw\") pod \"community-operators-vsd29\" (UID: \"f8b6cfe8-0ceb-4d66-945e-5eb95641a779\") " pod="openshift-marketplace/community-operators-vsd29" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.690576 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b6cfe8-0ceb-4d66-945e-5eb95641a779-utilities\") pod \"community-operators-vsd29\" (UID: \"f8b6cfe8-0ceb-4d66-945e-5eb95641a779\") " pod="openshift-marketplace/community-operators-vsd29" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.690645 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b6cfe8-0ceb-4d66-945e-5eb95641a779-catalog-content\") pod \"community-operators-vsd29\" (UID: \"f8b6cfe8-0ceb-4d66-945e-5eb95641a779\") " pod="openshift-marketplace/community-operators-vsd29" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.791942 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b6cfe8-0ceb-4d66-945e-5eb95641a779-utilities\") pod \"community-operators-vsd29\" (UID: \"f8b6cfe8-0ceb-4d66-945e-5eb95641a779\") " pod="openshift-marketplace/community-operators-vsd29" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.792302 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b6cfe8-0ceb-4d66-945e-5eb95641a779-catalog-content\") pod \"community-operators-vsd29\" (UID: \"f8b6cfe8-0ceb-4d66-945e-5eb95641a779\") " pod="openshift-marketplace/community-operators-vsd29" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.792395 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kbdmw\" (UniqueName: \"kubernetes.io/projected/f8b6cfe8-0ceb-4d66-945e-5eb95641a779-kube-api-access-kbdmw\") pod \"community-operators-vsd29\" (UID: \"f8b6cfe8-0ceb-4d66-945e-5eb95641a779\") " pod="openshift-marketplace/community-operators-vsd29" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.792458 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b6cfe8-0ceb-4d66-945e-5eb95641a779-utilities\") pod \"community-operators-vsd29\" (UID: \"f8b6cfe8-0ceb-4d66-945e-5eb95641a779\") " pod="openshift-marketplace/community-operators-vsd29" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.792672 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b6cfe8-0ceb-4d66-945e-5eb95641a779-catalog-content\") pod \"community-operators-vsd29\" (UID: \"f8b6cfe8-0ceb-4d66-945e-5eb95641a779\") " pod="openshift-marketplace/community-operators-vsd29" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.811398 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbdmw\" (UniqueName: \"kubernetes.io/projected/f8b6cfe8-0ceb-4d66-945e-5eb95641a779-kube-api-access-kbdmw\") pod 
\"community-operators-vsd29\" (UID: \"f8b6cfe8-0ceb-4d66-945e-5eb95641a779\") " pod="openshift-marketplace/community-operators-vsd29" Dec 08 19:35:04 crc kubenswrapper[5118]: I1208 19:35:04.834036 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsd29" Dec 08 19:35:05 crc kubenswrapper[5118]: I1208 19:35:05.049345 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nn7wp"] Dec 08 19:35:05 crc kubenswrapper[5118]: W1208 19:35:05.053728 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb359b2d_2e73_4aea_b4fe_4510ff35e056.slice/crio-0245931b76fd8e85a5f0287da42cbf0affe15934e13394f444985a9cfe81fa32 WatchSource:0}: Error finding container 0245931b76fd8e85a5f0287da42cbf0affe15934e13394f444985a9cfe81fa32: Status 404 returned error can't find the container with id 0245931b76fd8e85a5f0287da42cbf0affe15934e13394f444985a9cfe81fa32 Dec 08 19:35:05 crc kubenswrapper[5118]: I1208 19:35:05.212725 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsd29"] Dec 08 19:35:05 crc kubenswrapper[5118]: W1208 19:35:05.248222 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8b6cfe8_0ceb_4d66_945e_5eb95641a779.slice/crio-ef8a3501de7ecefea1a58090ea0381fd2ab8c2837208b2eff5e0a46bf3b957c0 WatchSource:0}: Error finding container ef8a3501de7ecefea1a58090ea0381fd2ab8c2837208b2eff5e0a46bf3b957c0: Status 404 returned error can't find the container with id ef8a3501de7ecefea1a58090ea0381fd2ab8c2837208b2eff5e0a46bf3b957c0 Dec 08 19:35:05 crc kubenswrapper[5118]: I1208 19:35:05.973655 5118 generic.go:358] "Generic (PLEG): container finished" podID="f8b6cfe8-0ceb-4d66-945e-5eb95641a779" containerID="687ad92008d1b4835d42ecec6e48aa021f24c9dfd649d0a6fdecb3b4bc0cee57" exitCode=0 Dec 08 19:35:05 crc kubenswrapper[5118]: I1208 19:35:05.973839 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsd29" event={"ID":"f8b6cfe8-0ceb-4d66-945e-5eb95641a779","Type":"ContainerDied","Data":"687ad92008d1b4835d42ecec6e48aa021f24c9dfd649d0a6fdecb3b4bc0cee57"} Dec 08 19:35:05 crc kubenswrapper[5118]: I1208 19:35:05.973939 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsd29" event={"ID":"f8b6cfe8-0ceb-4d66-945e-5eb95641a779","Type":"ContainerStarted","Data":"ef8a3501de7ecefea1a58090ea0381fd2ab8c2837208b2eff5e0a46bf3b957c0"} Dec 08 19:35:05 crc kubenswrapper[5118]: I1208 19:35:05.975697 5118 generic.go:358] "Generic (PLEG): container finished" podID="cb359b2d-2e73-4aea-b4fe-4510ff35e056" containerID="5b4dee49c2b8460b0b7cba4d9d419f257147981b7f8db64ea9790ec1d1efabc7" exitCode=0 Dec 08 19:35:05 crc kubenswrapper[5118]: I1208 19:35:05.976406 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nn7wp" event={"ID":"cb359b2d-2e73-4aea-b4fe-4510ff35e056","Type":"ContainerDied","Data":"5b4dee49c2b8460b0b7cba4d9d419f257147981b7f8db64ea9790ec1d1efabc7"} Dec 08 19:35:05 crc kubenswrapper[5118]: I1208 19:35:05.976458 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nn7wp" event={"ID":"cb359b2d-2e73-4aea-b4fe-4510ff35e056","Type":"ContainerStarted","Data":"0245931b76fd8e85a5f0287da42cbf0affe15934e13394f444985a9cfe81fa32"} 
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.694211 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2p8vn"]
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.748096 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p8vn"]
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.748244 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.750667 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.850177 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-utilities\") pod \"redhat-marketplace-2p8vn\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.850260 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzgsn\" (UniqueName: \"kubernetes.io/projected/a0364b29-2456-4ccd-8b62-0374c2c8959c-kube-api-access-gzgsn\") pod \"redhat-marketplace-2p8vn\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.850332 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-catalog-content\") pod \"redhat-marketplace-2p8vn\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.896400 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6cgbn"]
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.903743 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.905383 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6cgbn"]
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.907128 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.951573 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-catalog-content\") pod \"redhat-marketplace-2p8vn\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.951618 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-utilities\") pod \"redhat-marketplace-2p8vn\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.951645 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wb6d\" (UniqueName: \"kubernetes.io/projected/5cd47479-86df-4175-ac1c-96ae73b2db76-kube-api-access-7wb6d\") pod \"redhat-operators-6cgbn\" (UID: \"5cd47479-86df-4175-ac1c-96ae73b2db76\") " pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.951746 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gzgsn\" (UniqueName: \"kubernetes.io/projected/a0364b29-2456-4ccd-8b62-0374c2c8959c-kube-api-access-gzgsn\") pod \"redhat-marketplace-2p8vn\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.951771 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd47479-86df-4175-ac1c-96ae73b2db76-utilities\") pod \"redhat-operators-6cgbn\" (UID: \"5cd47479-86df-4175-ac1c-96ae73b2db76\") " pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.951794 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd47479-86df-4175-ac1c-96ae73b2db76-catalog-content\") pod \"redhat-operators-6cgbn\" (UID: \"5cd47479-86df-4175-ac1c-96ae73b2db76\") " pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.952198 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-catalog-content\") pod \"redhat-marketplace-2p8vn\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.952513 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-utilities\") pod \"redhat-marketplace-2p8vn\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.974607 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzgsn\" (UniqueName: \"kubernetes.io/projected/a0364b29-2456-4ccd-8b62-0374c2c8959c-kube-api-access-gzgsn\") pod \"redhat-marketplace-2p8vn\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.985541 5118 generic.go:358] "Generic (PLEG): container finished" podID="f8b6cfe8-0ceb-4d66-945e-5eb95641a779" containerID="b7843fc2d587eb36f47c9c01e45c82325dfb9bd07ddba6ee24fbd4137c79072c" exitCode=0
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.985644 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsd29" event={"ID":"f8b6cfe8-0ceb-4d66-945e-5eb95641a779","Type":"ContainerDied","Data":"b7843fc2d587eb36f47c9c01e45c82325dfb9bd07ddba6ee24fbd4137c79072c"}
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.988655 5118 generic.go:358] "Generic (PLEG): container finished" podID="cb359b2d-2e73-4aea-b4fe-4510ff35e056" containerID="afb48d365a81016c88f1376537a9159fdb30f59a9b0d32b4dc60969d3a69afc6" exitCode=0
Dec 08 19:35:06 crc kubenswrapper[5118]: I1208 19:35:06.988772 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nn7wp" event={"ID":"cb359b2d-2e73-4aea-b4fe-4510ff35e056","Type":"ContainerDied","Data":"afb48d365a81016c88f1376537a9159fdb30f59a9b0d32b4dc60969d3a69afc6"}
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.052270 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd47479-86df-4175-ac1c-96ae73b2db76-utilities\") pod \"redhat-operators-6cgbn\" (UID: \"5cd47479-86df-4175-ac1c-96ae73b2db76\") " pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.052309 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd47479-86df-4175-ac1c-96ae73b2db76-catalog-content\") pod \"redhat-operators-6cgbn\" (UID: \"5cd47479-86df-4175-ac1c-96ae73b2db76\") " pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.052380 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7wb6d\" (UniqueName: \"kubernetes.io/projected/5cd47479-86df-4175-ac1c-96ae73b2db76-kube-api-access-7wb6d\") pod \"redhat-operators-6cgbn\" (UID: \"5cd47479-86df-4175-ac1c-96ae73b2db76\") " pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.052934 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd47479-86df-4175-ac1c-96ae73b2db76-catalog-content\") pod \"redhat-operators-6cgbn\" (UID: \"5cd47479-86df-4175-ac1c-96ae73b2db76\") " pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.053127 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd47479-86df-4175-ac1c-96ae73b2db76-utilities\") pod \"redhat-operators-6cgbn\" (UID: \"5cd47479-86df-4175-ac1c-96ae73b2db76\") " pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.064993 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.068407 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wb6d\" (UniqueName: \"kubernetes.io/projected/5cd47479-86df-4175-ac1c-96ae73b2db76-kube-api-access-7wb6d\") pod \"redhat-operators-6cgbn\" (UID: \"5cd47479-86df-4175-ac1c-96ae73b2db76\") " pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.319674 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.450583 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p8vn"]
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.731837 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6cgbn"]
Dec 08 19:35:07 crc kubenswrapper[5118]: W1208 19:35:07.735010 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cd47479_86df_4175_ac1c_96ae73b2db76.slice/crio-83efc2b9999c7124b58723e6078c8349e52096563ccd2fa304824b89ffe7ed12 WatchSource:0}: Error finding container 83efc2b9999c7124b58723e6078c8349e52096563ccd2fa304824b89ffe7ed12: Status 404 returned error can't find the container with id 83efc2b9999c7124b58723e6078c8349e52096563ccd2fa304824b89ffe7ed12
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.996283 5118 generic.go:358] "Generic (PLEG): container finished" podID="5cd47479-86df-4175-ac1c-96ae73b2db76" containerID="3aec55e1e91e879490d67d59bdd118e473af00eb4d623b63f089161211742c1a" exitCode=0
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.996369 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cgbn" event={"ID":"5cd47479-86df-4175-ac1c-96ae73b2db76","Type":"ContainerDied","Data":"3aec55e1e91e879490d67d59bdd118e473af00eb4d623b63f089161211742c1a"}
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.996400 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cgbn" event={"ID":"5cd47479-86df-4175-ac1c-96ae73b2db76","Type":"ContainerStarted","Data":"83efc2b9999c7124b58723e6078c8349e52096563ccd2fa304824b89ffe7ed12"}
Dec 08 19:35:07 crc kubenswrapper[5118]: I1208 19:35:07.999604 5118 generic.go:358] "Generic (PLEG): container finished" podID="a0364b29-2456-4ccd-8b62-0374c2c8959c" containerID="44da1abb0fe7c8889eae4e01e3bb2f8adfbec07f3c22bcc7d3631bb6be968e07" exitCode=0
Dec 08 19:35:08 crc kubenswrapper[5118]: I1208 19:35:08.000496 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p8vn" event={"ID":"a0364b29-2456-4ccd-8b62-0374c2c8959c","Type":"ContainerDied","Data":"44da1abb0fe7c8889eae4e01e3bb2f8adfbec07f3c22bcc7d3631bb6be968e07"}
Dec 08 19:35:08 crc kubenswrapper[5118]: I1208 19:35:08.000539 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p8vn" event={"ID":"a0364b29-2456-4ccd-8b62-0374c2c8959c","Type":"ContainerStarted","Data":"36323cb9913bb8bbf213a986f8a7c83a5449bd4135e8f0e0408674976bbe6ae6"}
Dec 08 19:35:08 crc kubenswrapper[5118]: I1208 19:35:08.002612 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsd29" event={"ID":"f8b6cfe8-0ceb-4d66-945e-5eb95641a779","Type":"ContainerStarted","Data":"c26b282b4be4da38085aa2ac4c41c8e78977b5f051a1e3c3dc7e87f51dc9375e"}
Dec 08 19:35:08 crc kubenswrapper[5118]: I1208 19:35:08.007061 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nn7wp" event={"ID":"cb359b2d-2e73-4aea-b4fe-4510ff35e056","Type":"ContainerStarted","Data":"5a0d6efcb8fc941b455c42e50ab9b14c2184d76619f267917c1fff548144fd99"}
Dec 08 19:35:08 crc kubenswrapper[5118]: I1208 19:35:08.044365 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vsd29" podStartSLOduration=3.498709762 podStartE2EDuration="4.044348992s" podCreationTimestamp="2025-12-08 19:35:04 +0000 UTC" firstStartedPulling="2025-12-08 19:35:05.974641803 +0000 UTC m=+358.267487260" lastFinishedPulling="2025-12-08 19:35:06.520281023 +0000 UTC m=+358.813126490" observedRunningTime="2025-12-08 19:35:08.036816114 +0000 UTC m=+360.329661571" watchObservedRunningTime="2025-12-08 19:35:08.044348992 +0000 UTC m=+360.337194449"
Dec 08 19:35:08 crc kubenswrapper[5118]: I1208 19:35:08.069663 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nn7wp" podStartSLOduration=3.573021897 podStartE2EDuration="4.069645851s" podCreationTimestamp="2025-12-08 19:35:04 +0000 UTC" firstStartedPulling="2025-12-08 19:35:05.976631368 +0000 UTC m=+358.269476835" lastFinishedPulling="2025-12-08 19:35:06.473255332 +0000 UTC m=+358.766100789" observedRunningTime="2025-12-08 19:35:08.06776083 +0000 UTC m=+360.360606287" watchObservedRunningTime="2025-12-08 19:35:08.069645851 +0000 UTC m=+360.362491308"
Dec 08 19:35:09 crc kubenswrapper[5118]: I1208 19:35:09.012908 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cgbn" event={"ID":"5cd47479-86df-4175-ac1c-96ae73b2db76","Type":"ContainerStarted","Data":"dd5b9b58826cc791a346405ea23f4f70e51894663380f3457549251e904b5c43"}
Dec 08 19:35:09 crc kubenswrapper[5118]: I1208 19:35:09.014610 5118 generic.go:358] "Generic (PLEG): container finished" podID="a0364b29-2456-4ccd-8b62-0374c2c8959c" containerID="a92038cd1b672310e2d666a93a48d670a43d02a0a399a6599aadae2bf034650b" exitCode=0
Dec 08 19:35:09 crc kubenswrapper[5118]: I1208 19:35:09.014680 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p8vn" event={"ID":"a0364b29-2456-4ccd-8b62-0374c2c8959c","Type":"ContainerDied","Data":"a92038cd1b672310e2d666a93a48d670a43d02a0a399a6599aadae2bf034650b"}
Dec 08 19:35:10 crc kubenswrapper[5118]: I1208 19:35:10.023469 5118 generic.go:358] "Generic (PLEG): container finished" podID="5cd47479-86df-4175-ac1c-96ae73b2db76" containerID="dd5b9b58826cc791a346405ea23f4f70e51894663380f3457549251e904b5c43" exitCode=0
Dec 08 19:35:10 crc kubenswrapper[5118]: I1208 19:35:10.023510 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cgbn" event={"ID":"5cd47479-86df-4175-ac1c-96ae73b2db76","Type":"ContainerDied","Data":"dd5b9b58826cc791a346405ea23f4f70e51894663380f3457549251e904b5c43"}
Dec 08 19:35:10 crc kubenswrapper[5118]: I1208 19:35:10.030833 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p8vn" event={"ID":"a0364b29-2456-4ccd-8b62-0374c2c8959c","Type":"ContainerStarted","Data":"e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d"}
Dec 08 19:35:11 crc kubenswrapper[5118]: I1208 19:35:11.039431 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cgbn" event={"ID":"5cd47479-86df-4175-ac1c-96ae73b2db76","Type":"ContainerStarted","Data":"b18980ffae174eb081bf5965f4e3a6cd98977edf080d7c04da7fac8183cf0a9f"}
Dec 08 19:35:11 crc kubenswrapper[5118]: I1208 19:35:11.061016 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6cgbn" podStartSLOduration=4.520955376 podStartE2EDuration="5.06098914s" podCreationTimestamp="2025-12-08 19:35:06 +0000 UTC" firstStartedPulling="2025-12-08 19:35:07.997290121 +0000 UTC m=+360.290135578" lastFinishedPulling="2025-12-08 19:35:08.537323885 +0000 UTC m=+360.830169342" observedRunningTime="2025-12-08 19:35:11.057967576 +0000 UTC m=+363.350813073" watchObservedRunningTime="2025-12-08 19:35:11.06098914 +0000 UTC m=+363.353834617"
Dec 08 19:35:11 crc kubenswrapper[5118]: I1208 19:35:11.061783 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2p8vn" podStartSLOduration=4.511579494 podStartE2EDuration="5.061776751s" podCreationTimestamp="2025-12-08 19:35:06 +0000 UTC" firstStartedPulling="2025-12-08 19:35:08.000575791 +0000 UTC m=+360.293421258" lastFinishedPulling="2025-12-08 19:35:08.550773058 +0000 UTC m=+360.843618515" observedRunningTime="2025-12-08 19:35:10.078095187 +0000 UTC m=+362.370940644" watchObservedRunningTime="2025-12-08 19:35:11.061776751 +0000 UTC m=+363.354622218"
Dec 08 19:35:14 crc kubenswrapper[5118]: I1208 19:35:14.668536 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-nn7wp"
Dec 08 19:35:14 crc kubenswrapper[5118]: I1208 19:35:14.668855 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nn7wp"
Dec 08 19:35:14 crc kubenswrapper[5118]: I1208 19:35:14.741301 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nn7wp"
Dec 08 19:35:14 crc kubenswrapper[5118]: I1208 19:35:14.834721 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-vsd29"
Dec 08 19:35:14 crc kubenswrapper[5118]: I1208 19:35:14.834775 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vsd29"
Dec 08 19:35:14 crc kubenswrapper[5118]: I1208 19:35:14.877717 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vsd29"
Dec 08 19:35:15 crc kubenswrapper[5118]: I1208 19:35:15.107384 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vsd29"
Dec 08 19:35:15 crc kubenswrapper[5118]: I1208 19:35:15.116768 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nn7wp"
Dec 08 19:35:17 crc kubenswrapper[5118]: I1208 19:35:17.065395 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:17 crc kubenswrapper[5118]: I1208 19:35:17.065708 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:17 crc kubenswrapper[5118]: I1208 19:35:17.110128 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:17 crc kubenswrapper[5118]: I1208 19:35:17.154517 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2p8vn"
Dec 08 19:35:17 crc kubenswrapper[5118]: I1208 19:35:17.320768 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:17 crc kubenswrapper[5118]: I1208 19:35:17.320847 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:17 crc kubenswrapper[5118]: I1208 19:35:17.380505 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:35:18 crc kubenswrapper[5118]: I1208 19:35:18.149732 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6cgbn"
Dec 08 19:36:09 crc kubenswrapper[5118]: I1208 19:36:09.468090 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:36:09 crc kubenswrapper[5118]: I1208 19:36:09.469189 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:36:39 crc kubenswrapper[5118]: I1208 19:36:39.467979 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:36:39 crc kubenswrapper[5118]: I1208 19:36:39.468816 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:37:09 crc kubenswrapper[5118]: I1208 19:37:09.468184 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:37:09 crc kubenswrapper[5118]: I1208 19:37:09.468738 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:37:09 crc kubenswrapper[5118]: I1208 19:37:09.468805 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-twnt9"
Dec 08 19:37:09 crc kubenswrapper[5118]: I1208 19:37:09.469602 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a83000e5f1454b2084ae52cb60bef4a8eb4e2dca054391d550af658c8371fed"} pod="openshift-machine-config-operator/machine-config-daemon-twnt9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 19:37:09 crc kubenswrapper[5118]: I1208 19:37:09.469736 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" containerID="cri-o://9a83000e5f1454b2084ae52cb60bef4a8eb4e2dca054391d550af658c8371fed" gracePeriod=600
Dec 08 19:37:09 crc kubenswrapper[5118]: I1208 19:37:09.798902 5118 generic.go:358] "Generic (PLEG): container finished" podID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerID="9a83000e5f1454b2084ae52cb60bef4a8eb4e2dca054391d550af658c8371fed" exitCode=0
Dec 08 19:37:09 crc kubenswrapper[5118]: I1208 19:37:09.798986 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerDied","Data":"9a83000e5f1454b2084ae52cb60bef4a8eb4e2dca054391d550af658c8371fed"}
Dec 08 19:37:09 crc kubenswrapper[5118]: I1208 19:37:09.799338 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerStarted","Data":"9d9ec033c2d11bd8a4bc45cbc441ba68a7926e1d3c57f8675045fe5aa0fb6da7"}
Dec 08 19:37:09 crc kubenswrapper[5118]: I1208 19:37:09.799359 5118 scope.go:117] "RemoveContainer" containerID="92647ff13fb1d82844fdc1c78fadbe5a9f72de51c235d82acb429790753aa73b"
Dec 08 19:39:08 crc kubenswrapper[5118]: I1208 19:39:08.321285 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 08 19:39:08 crc kubenswrapper[5118]: I1208 19:39:08.321599 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 08 19:39:09 crc kubenswrapper[5118]: I1208 19:39:09.468013 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:39:09 crc kubenswrapper[5118]: I1208 19:39:09.468201 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:39:26 crc kubenswrapper[5118]: I1208 19:39:26.723559 5118 ???:1] "http: TLS handshake error from 192.168.126.11:51578: no serving certificate available for the kubelet"
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.215430 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2"]
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.216163 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" podUID="fc62458c-133b-4909-91ab-b28870b78816" containerName="kube-rbac-proxy" containerID="cri-o://ebb7bab4f88ec8dba2d4335caae7a71c141f61dd649a4738a19dac43d9570695" gracePeriod=30
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.216248 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" podUID="fc62458c-133b-4909-91ab-b28870b78816" containerName="ovnkube-cluster-manager" containerID="cri-o://3c76de142b2f857046a2b0c4f36c88cf22b35c02d03f8513c826780f31014de4" gracePeriod=30
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.403662 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2"
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.427552 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k6klf"]
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.430079 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovn-controller" containerID="cri-o://308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198" gracePeriod=30
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.430263 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1" gracePeriod=30
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.430313 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="kube-rbac-proxy-node" containerID="cri-o://1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c" gracePeriod=30
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.430314 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="northd" containerID="cri-o://7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e" gracePeriod=30
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.430374 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovn-acl-logging" containerID="cri-o://16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5" gracePeriod=30
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.430646 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="sbdb" containerID="cri-o://7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a" gracePeriod=30
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.430750 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="nbdb" containerID="cri-o://dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d" gracePeriod=30
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.445052 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854"]
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.445608 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc62458c-133b-4909-91ab-b28870b78816" containerName="kube-rbac-proxy"
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.445620 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc62458c-133b-4909-91ab-b28870b78816" containerName="kube-rbac-proxy"
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.445643 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc62458c-133b-4909-91ab-b28870b78816" containerName="ovnkube-cluster-manager"
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.445649 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc62458c-133b-4909-91ab-b28870b78816" containerName="ovnkube-cluster-manager"
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.445762 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc62458c-133b-4909-91ab-b28870b78816" containerName="kube-rbac-proxy"
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.445776 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc62458c-133b-4909-91ab-b28870b78816" containerName="ovnkube-cluster-manager"
Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.452467 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.462230 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovnkube-controller" containerID="cri-o://ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be" gracePeriod=30 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.522142 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-ovnkube-config\") pod \"fc62458c-133b-4909-91ab-b28870b78816\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.522199 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-env-overrides\") pod \"fc62458c-133b-4909-91ab-b28870b78816\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.522229 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc62458c-133b-4909-91ab-b28870b78816-ovn-control-plane-metrics-cert\") pod \"fc62458c-133b-4909-91ab-b28870b78816\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.522393 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h27mb\" (UniqueName: \"kubernetes.io/projected/fc62458c-133b-4909-91ab-b28870b78816-kube-api-access-h27mb\") pod \"fc62458c-133b-4909-91ab-b28870b78816\" (UID: \"fc62458c-133b-4909-91ab-b28870b78816\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.522882 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "fc62458c-133b-4909-91ab-b28870b78816" (UID: "fc62458c-133b-4909-91ab-b28870b78816"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.522919 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "fc62458c-133b-4909-91ab-b28870b78816" (UID: "fc62458c-133b-4909-91ab-b28870b78816"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.527347 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc62458c-133b-4909-91ab-b28870b78816-kube-api-access-h27mb" (OuterVolumeSpecName: "kube-api-access-h27mb") pod "fc62458c-133b-4909-91ab-b28870b78816" (UID: "fc62458c-133b-4909-91ab-b28870b78816"). InnerVolumeSpecName "kube-api-access-h27mb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.527630 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc62458c-133b-4909-91ab-b28870b78816-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "fc62458c-133b-4909-91ab-b28870b78816" (UID: "fc62458c-133b-4909-91ab-b28870b78816"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.623853 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6caec332-9df1-4299-978e-1c8fbe14f2c1-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.623899 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6caec332-9df1-4299-978e-1c8fbe14f2c1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.623935 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6caec332-9df1-4299-978e-1c8fbe14f2c1-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.624012 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn5rs\" (UniqueName: \"kubernetes.io/projected/6caec332-9df1-4299-978e-1c8fbe14f2c1-kube-api-access-bn5rs\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.624058 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.624068 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc62458c-133b-4909-91ab-b28870b78816-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.624078 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h27mb\" (UniqueName: \"kubernetes.io/projected/fc62458c-133b-4909-91ab-b28870b78816-kube-api-access-h27mb\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.624086 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc62458c-133b-4909-91ab-b28870b78816-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.676318 5118 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6klf_e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6/ovn-acl-logging/0.log" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.676760 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6klf_e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6/ovn-controller/0.log" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.677142 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.698824 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6klf_e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6/ovn-acl-logging/0.log" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699276 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6klf_e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6/ovn-controller/0.log" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699707 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerID="ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be" exitCode=0 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699730 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerID="7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a" exitCode=0 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699738 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerID="dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d" exitCode=0 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699745 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerID="7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e" exitCode=0 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699753 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerID="964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1" exitCode=0 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699760 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerID="1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c" exitCode=0 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699768 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerID="16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5" exitCode=143 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699775 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerID="308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198" exitCode=143 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699866 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerDied","Data":"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699895 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerDied","Data":"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699908 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerDied","Data":"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699921 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerDied","Data":"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699934 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerDied","Data":"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699946 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerDied","Data":"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699947 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699975 5118 scope.go:117] "RemoveContainer" containerID="ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.699958 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700150 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700157 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700166 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerDied","Data":"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700176 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700181 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700186 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700191 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700196 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700201 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700206 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700210 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700215 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700222 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerDied","Data":"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700230 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700236 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700241 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700246 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700252 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700257 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700261 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700266 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700271 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700278 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6klf" event={"ID":"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6","Type":"ContainerDied","Data":"65978c9b871deab25ae63164fbd953cfd3bac8ab2f630085a500440e9fba4afa"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700286 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700293 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700298 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700303 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700308 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700313 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700318 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700323 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.700328 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.708888 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-j4b8g_1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742/kube-multus/0.log" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.708960 5118 generic.go:358] "Generic (PLEG): container finished" podID="1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742" 
containerID="d4e35812f048f9b4a1f8a2dfc7e60eb1a2d7df2bce39455c9e8ba7657e3b9fb8" exitCode=2 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.709118 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-j4b8g" event={"ID":"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742","Type":"ContainerDied","Data":"d4e35812f048f9b4a1f8a2dfc7e60eb1a2d7df2bce39455c9e8ba7657e3b9fb8"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.711042 5118 scope.go:117] "RemoveContainer" containerID="d4e35812f048f9b4a1f8a2dfc7e60eb1a2d7df2bce39455c9e8ba7657e3b9fb8" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.712805 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.714739 5118 generic.go:358] "Generic (PLEG): container finished" podID="fc62458c-133b-4909-91ab-b28870b78816" containerID="3c76de142b2f857046a2b0c4f36c88cf22b35c02d03f8513c826780f31014de4" exitCode=0 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.714826 5118 generic.go:358] "Generic (PLEG): container finished" podID="fc62458c-133b-4909-91ab-b28870b78816" containerID="ebb7bab4f88ec8dba2d4335caae7a71c141f61dd649a4738a19dac43d9570695" exitCode=0 Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.714886 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.714780 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" event={"ID":"fc62458c-133b-4909-91ab-b28870b78816","Type":"ContainerDied","Data":"3c76de142b2f857046a2b0c4f36c88cf22b35c02d03f8513c826780f31014de4"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.715097 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c76de142b2f857046a2b0c4f36c88cf22b35c02d03f8513c826780f31014de4"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.715142 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebb7bab4f88ec8dba2d4335caae7a71c141f61dd649a4738a19dac43d9570695"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.715160 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" event={"ID":"fc62458c-133b-4909-91ab-b28870b78816","Type":"ContainerDied","Data":"ebb7bab4f88ec8dba2d4335caae7a71c141f61dd649a4738a19dac43d9570695"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.715181 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c76de142b2f857046a2b0c4f36c88cf22b35c02d03f8513c826780f31014de4"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.715189 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebb7bab4f88ec8dba2d4335caae7a71c141f61dd649a4738a19dac43d9570695"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.715197 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2" event={"ID":"fc62458c-133b-4909-91ab-b28870b78816","Type":"ContainerDied","Data":"8fa61aee39a0d2068ea74bfb6b90c57ef232abf87c714faa1fe72a465724906d"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.715206 5118 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c76de142b2f857046a2b0c4f36c88cf22b35c02d03f8513c826780f31014de4"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.715213 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebb7bab4f88ec8dba2d4335caae7a71c141f61dd649a4738a19dac43d9570695"} Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.725343 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bn5rs\" (UniqueName: \"kubernetes.io/projected/6caec332-9df1-4299-978e-1c8fbe14f2c1-kube-api-access-bn5rs\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.725390 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6caec332-9df1-4299-978e-1c8fbe14f2c1-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.725410 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6caec332-9df1-4299-978e-1c8fbe14f2c1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.725444 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6caec332-9df1-4299-978e-1c8fbe14f2c1-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.726269 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6caec332-9df1-4299-978e-1c8fbe14f2c1-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.726293 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6caec332-9df1-4299-978e-1c8fbe14f2c1-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.733086 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6caec332-9df1-4299-978e-1c8fbe14f2c1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734154 5118 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-f6vpt"] Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734846 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734869 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734883 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovn-acl-logging" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734891 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovn-acl-logging" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734902 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="northd" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734909 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="northd" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734919 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovn-controller" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734926 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovn-controller" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734936 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="kube-rbac-proxy-node" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734944 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="kube-rbac-proxy-node" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734954 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="nbdb" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734961 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="nbdb" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734969 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="sbdb" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.734976 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="sbdb" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.735001 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovnkube-controller" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.735008 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovnkube-controller" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.735019 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="kubecfg-setup" Dec 08 19:39:27 crc 
kubenswrapper[5118]: I1208 19:39:27.735026 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="kubecfg-setup" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.735152 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovn-controller" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.735165 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovnkube-controller" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.735175 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="sbdb" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.735186 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="nbdb" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.735195 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.735204 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="kube-rbac-proxy-node" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.735212 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="ovn-acl-logging" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.735222 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" containerName="northd" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.748809 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.755501 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2"] Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.758626 5118 scope.go:117] "RemoveContainer" containerID="7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.759697 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-r2hg2"] Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.761368 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn5rs\" (UniqueName: \"kubernetes.io/projected/6caec332-9df1-4299-978e-1c8fbe14f2c1-kube-api-access-bn5rs\") pod \"ovnkube-control-plane-97c9b6c48-hb854\" (UID: \"6caec332-9df1-4299-978e-1c8fbe14f2c1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.770411 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.775934 5118 scope.go:117] "RemoveContainer" containerID="dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.798875 5118 scope.go:117] "RemoveContainer" containerID="7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.812187 5118 scope.go:117] "RemoveContainer" containerID="964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.826045 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-systemd\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.826114 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-env-overrides\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.826133 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-var-lib-openvswitch\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.826359 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.826812 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.826872 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-netd\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.826900 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-kubelet\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.826943 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.826971 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-slash\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.826991 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-config\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827038 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827040 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-slash" (OuterVolumeSpecName: "host-slash") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827089 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-ovn-kubernetes\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827144 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-script-lib\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827207 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827342 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqt29\" (UniqueName: \"kubernetes.io/projected/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-kube-api-access-nqt29\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827404 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-netns\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827437 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-systemd-units\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827556 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-bin\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827484 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827590 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-openvswitch\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827615 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-ovn\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827505 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827567 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827610 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827644 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827616 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827654 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-node-log\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827675 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827676 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-etc-openvswitch\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827724 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827754 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovn-node-metrics-cert\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827774 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-node-log" (OuterVolumeSpecName: "node-log") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827793 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827811 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.827863 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-log-socket\") pod \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\" (UID: \"e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6\") " Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828108 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-log-socket" (OuterVolumeSpecName: "log-socket") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828288 5118 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828302 5118 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-node-log\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828311 5118 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828320 5118 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828331 5118 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-log-socket\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828341 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828353 5118 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828361 5118 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828369 5118 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828377 5118 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-slash\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc 
kubenswrapper[5118]: I1208 19:39:27.828385 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828393 5118 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828401 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828408 5118 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828416 5118 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828424 5118 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.828433 5118 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.831579 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-kube-api-access-nqt29" (OuterVolumeSpecName: "kube-api-access-nqt29") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "kube-api-access-nqt29". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.831973 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.832845 5118 scope.go:117] "RemoveContainer" containerID="1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.844100 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" (UID: "e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.846908 5118 scope.go:117] "RemoveContainer" containerID="16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.868346 5118 scope.go:117] "RemoveContainer" containerID="308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.885082 5118 scope.go:117] "RemoveContainer" containerID="8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.910261 5118 scope.go:117] "RemoveContainer" containerID="ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be" Dec 08 19:39:27 crc kubenswrapper[5118]: E1208 19:39:27.910733 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be\": container with ID starting with ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be not found: ID does not exist" containerID="ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.910761 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be"} err="failed to get container status \"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be\": rpc error: code = NotFound desc = could not find container \"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be\": container with ID starting with ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.910798 5118 scope.go:117] "RemoveContainer" containerID="7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a" Dec 08 19:39:27 crc kubenswrapper[5118]: E1208 19:39:27.911092 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a\": container with ID starting with 7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a not found: ID does not exist" containerID="7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.911141 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a"} err="failed to get container status \"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a\": rpc error: code = NotFound desc = could not find container \"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a\": container with ID starting with 7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.911154 5118 scope.go:117] "RemoveContainer" containerID="dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d" Dec 08 19:39:27 crc kubenswrapper[5118]: E1208 19:39:27.911499 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d\": container with ID starting with 
dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d not found: ID does not exist" containerID="dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.911546 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d"} err="failed to get container status \"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d\": rpc error: code = NotFound desc = could not find container \"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d\": container with ID starting with dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.911560 5118 scope.go:117] "RemoveContainer" containerID="7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e" Dec 08 19:39:27 crc kubenswrapper[5118]: E1208 19:39:27.913445 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e\": container with ID starting with 7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e not found: ID does not exist" containerID="7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.913473 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e"} err="failed to get container status \"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e\": rpc error: code = NotFound desc = could not find container \"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e\": container with ID starting with 7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.913514 5118 scope.go:117] "RemoveContainer" containerID="964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1" Dec 08 19:39:27 crc kubenswrapper[5118]: E1208 19:39:27.913985 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1\": container with ID starting with 964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1 not found: ID does not exist" containerID="964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.914039 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1"} err="failed to get container status \"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1\": rpc error: code = NotFound desc = could not find container \"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1\": container with ID starting with 964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.914062 5118 scope.go:117] "RemoveContainer" containerID="1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c" Dec 08 19:39:27 crc kubenswrapper[5118]: E1208 19:39:27.914318 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c\": container with ID starting with 1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c not found: ID does not exist" containerID="1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.914345 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c"} err="failed to get container status \"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c\": rpc error: code = NotFound desc = could not find container \"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c\": container with ID starting with 1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.914360 5118 scope.go:117] "RemoveContainer" containerID="16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5" Dec 08 19:39:27 crc kubenswrapper[5118]: E1208 19:39:27.914577 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5\": container with ID starting with 16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5 not found: ID does not exist" containerID="16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.914601 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5"} err="failed to get container status \"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5\": rpc error: code = NotFound desc = could not find container \"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5\": container with ID starting with 16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.914644 5118 scope.go:117] "RemoveContainer" containerID="308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198" Dec 08 19:39:27 crc kubenswrapper[5118]: E1208 19:39:27.914911 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198\": container with ID starting with 308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198 not found: ID does not exist" containerID="308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.914937 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198"} err="failed to get container status \"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198\": rpc error: code = NotFound desc = could not find container \"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198\": container with ID starting with 308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.914953 5118 scope.go:117] "RemoveContainer" 
containerID="8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed" Dec 08 19:39:27 crc kubenswrapper[5118]: E1208 19:39:27.915180 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed\": container with ID starting with 8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed not found: ID does not exist" containerID="8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.915205 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed"} err="failed to get container status \"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed\": rpc error: code = NotFound desc = could not find container \"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed\": container with ID starting with 8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.915221 5118 scope.go:117] "RemoveContainer" containerID="ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.915445 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be"} err="failed to get container status \"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be\": rpc error: code = NotFound desc = could not find container \"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be\": container with ID starting with ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.915467 5118 scope.go:117] "RemoveContainer" containerID="7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.915674 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a"} err="failed to get container status \"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a\": rpc error: code = NotFound desc = could not find container \"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a\": container with ID starting with 7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.915714 5118 scope.go:117] "RemoveContainer" containerID="dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.915929 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d"} err="failed to get container status \"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d\": rpc error: code = NotFound desc = could not find container \"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d\": container with ID starting with dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.915951 5118 scope.go:117] "RemoveContainer" 
containerID="7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.916159 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e"} err="failed to get container status \"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e\": rpc error: code = NotFound desc = could not find container \"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e\": container with ID starting with 7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.916180 5118 scope.go:117] "RemoveContainer" containerID="964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.916385 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1"} err="failed to get container status \"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1\": rpc error: code = NotFound desc = could not find container \"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1\": container with ID starting with 964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.916406 5118 scope.go:117] "RemoveContainer" containerID="1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.916627 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c"} err="failed to get container status \"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c\": rpc error: code = NotFound desc = could not find container \"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c\": container with ID starting with 1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.916650 5118 scope.go:117] "RemoveContainer" containerID="16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.916895 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5"} err="failed to get container status \"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5\": rpc error: code = NotFound desc = could not find container \"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5\": container with ID starting with 16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.916917 5118 scope.go:117] "RemoveContainer" containerID="308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.917165 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198"} err="failed to get container status \"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198\": rpc error: code = NotFound desc = could not find 
container \"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198\": container with ID starting with 308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.917190 5118 scope.go:117] "RemoveContainer" containerID="8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.917428 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed"} err="failed to get container status \"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed\": rpc error: code = NotFound desc = could not find container \"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed\": container with ID starting with 8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.917453 5118 scope.go:117] "RemoveContainer" containerID="ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.917722 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be"} err="failed to get container status \"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be\": rpc error: code = NotFound desc = could not find container \"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be\": container with ID starting with ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.917747 5118 scope.go:117] "RemoveContainer" containerID="7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.919719 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a"} err="failed to get container status \"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a\": rpc error: code = NotFound desc = could not find container \"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a\": container with ID starting with 7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.919740 5118 scope.go:117] "RemoveContainer" containerID="dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.920321 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d"} err="failed to get container status \"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d\": rpc error: code = NotFound desc = could not find container \"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d\": container with ID starting with dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.920338 5118 scope.go:117] "RemoveContainer" containerID="7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.920965 5118 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e"} err="failed to get container status \"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e\": rpc error: code = NotFound desc = could not find container \"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e\": container with ID starting with 7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.920985 5118 scope.go:117] "RemoveContainer" containerID="964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.921503 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1"} err="failed to get container status \"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1\": rpc error: code = NotFound desc = could not find container \"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1\": container with ID starting with 964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.921522 5118 scope.go:117] "RemoveContainer" containerID="1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.921818 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c"} err="failed to get container status \"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c\": rpc error: code = NotFound desc = could not find container \"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c\": container with ID starting with 1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.921841 5118 scope.go:117] "RemoveContainer" containerID="16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.922372 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5"} err="failed to get container status \"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5\": rpc error: code = NotFound desc = could not find container \"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5\": container with ID starting with 16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.922392 5118 scope.go:117] "RemoveContainer" containerID="308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.922574 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198"} err="failed to get container status \"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198\": rpc error: code = NotFound desc = could not find container \"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198\": container with ID starting with 
308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.922592 5118 scope.go:117] "RemoveContainer" containerID="8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.922864 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed"} err="failed to get container status \"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed\": rpc error: code = NotFound desc = could not find container \"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed\": container with ID starting with 8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.922888 5118 scope.go:117] "RemoveContainer" containerID="ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.923119 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be"} err="failed to get container status \"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be\": rpc error: code = NotFound desc = could not find container \"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be\": container with ID starting with ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.923138 5118 scope.go:117] "RemoveContainer" containerID="7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.923324 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a"} err="failed to get container status \"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a\": rpc error: code = NotFound desc = could not find container \"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a\": container with ID starting with 7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.923339 5118 scope.go:117] "RemoveContainer" containerID="dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.923475 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d"} err="failed to get container status \"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d\": rpc error: code = NotFound desc = could not find container \"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d\": container with ID starting with dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.923490 5118 scope.go:117] "RemoveContainer" containerID="7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.923742 5118 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e"} err="failed to get container status \"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e\": rpc error: code = NotFound desc = could not find container \"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e\": container with ID starting with 7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.923766 5118 scope.go:117] "RemoveContainer" containerID="964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.926852 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1"} err="failed to get container status \"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1\": rpc error: code = NotFound desc = could not find container \"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1\": container with ID starting with 964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.926876 5118 scope.go:117] "RemoveContainer" containerID="1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.927103 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c"} err="failed to get container status \"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c\": rpc error: code = NotFound desc = could not find container \"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c\": container with ID starting with 1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.927125 5118 scope.go:117] "RemoveContainer" containerID="16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.927495 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5"} err="failed to get container status \"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5\": rpc error: code = NotFound desc = could not find container \"16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5\": container with ID starting with 16a5d0a07fe2be168188965ddec7b0ccd83fefe3d8ea0bfcaba5b2fb5eb553f5 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.927513 5118 scope.go:117] "RemoveContainer" containerID="308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.927873 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198"} err="failed to get container status \"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198\": rpc error: code = NotFound desc = could not find container \"308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198\": container with ID starting with 308b611767358db0560044272295b05a0ecb78f0715de45a317d3bf8f3465198 not found: ID does not exist" Dec 
08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.927897 5118 scope.go:117] "RemoveContainer" containerID="8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.928092 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed"} err="failed to get container status \"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed\": rpc error: code = NotFound desc = could not find container \"8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed\": container with ID starting with 8eafc5a5943a0efd3df41e788ced7fac2f2d818aae95069f43a9c55ea47b45ed not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.928112 5118 scope.go:117] "RemoveContainer" containerID="ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.928336 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be"} err="failed to get container status \"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be\": rpc error: code = NotFound desc = could not find container \"ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be\": container with ID starting with ccac616f95e5a3e959905f32011399864ca7e826dc57d77b07bd360f688ca4be not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.928357 5118 scope.go:117] "RemoveContainer" containerID="7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.928571 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a"} err="failed to get container status \"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a\": rpc error: code = NotFound desc = could not find container \"7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a\": container with ID starting with 7673339448d7fbfac9a5a1a9a169f466c084713384fef5db8c2f101386f37a8a not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.928620 5118 scope.go:117] "RemoveContainer" containerID="dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.928870 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d"} err="failed to get container status \"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d\": rpc error: code = NotFound desc = could not find container \"dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d\": container with ID starting with dbba3b9d9d4a61d85fd31f88ed5efe2a077fd84ef8a1d4cfb32cf81465c1c29d not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.928919 5118 scope.go:117] "RemoveContainer" containerID="7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929281 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e"} err="failed to get container status 
\"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e\": rpc error: code = NotFound desc = could not find container \"7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e\": container with ID starting with 7f54a093b205e611f1c91216114c2b639353484bcaaea51c81791b473b5cb12e not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929305 5118 scope.go:117] "RemoveContainer" containerID="964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929307 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-run-ovn-kubernetes\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929340 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-run-systemd\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929369 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-run-openvswitch\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929396 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-slash\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929424 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-node-log\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929447 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-kubelet\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929482 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-var-lib-openvswitch\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929503 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-cni-bin\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929526 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc98af3f-58df-4fba-a276-24bee012837e-ovn-node-metrics-cert\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929554 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-systemd-units\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929572 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc98af3f-58df-4fba-a276-24bee012837e-ovnkube-config\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929586 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc98af3f-58df-4fba-a276-24bee012837e-ovnkube-script-lib\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929604 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-run-ovn\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929617 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-cni-netd\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929740 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-log-socket\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929777 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg27c\" (UniqueName: \"kubernetes.io/projected/fc98af3f-58df-4fba-a276-24bee012837e-kube-api-access-gg27c\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929958 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-etc-openvswitch\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.929985 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.930196 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-run-netns\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.930223 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc98af3f-58df-4fba-a276-24bee012837e-env-overrides\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.930541 5118 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.930553 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nqt29\" (UniqueName: \"kubernetes.io/projected/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-kube-api-access-nqt29\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.930967 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.931186 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1"} err="failed to get container status \"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1\": rpc error: code = NotFound desc = could not find container \"964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1\": container with ID starting with 964a145fdb8ecbc12e39dc3fb3d0696633eb9f066ca8746f21035fdbe1f716b1 not found: ID does not exist" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.931232 5118 scope.go:117] "RemoveContainer" containerID="1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c" Dec 08 19:39:27 crc kubenswrapper[5118]: I1208 19:39:27.931482 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c"} err="failed to get container status \"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c\": rpc error: code = NotFound desc = could not find 
container \"1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c\": container with ID starting with 1a96cf14b9cdcef88cd53495e6fd5fcca6aa21c129cdf45c55e601b82eb17f9c not found: ID does not exist" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.032136 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-run-ovn-kubernetes\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.032217 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-run-systemd\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.032245 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-run-openvswitch\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.033190 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-run-ovn-kubernetes\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.033256 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-slash\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.033311 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-node-log\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.033743 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-run-openvswitch\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.033682 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-slash\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.033925 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-node-log\") pod \"ovnkube-node-f6vpt\" (UID: 
\"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.033949 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-run-systemd\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034031 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-kubelet\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.033584 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-kubelet\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034623 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-var-lib-openvswitch\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034648 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-cni-bin\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034711 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc98af3f-58df-4fba-a276-24bee012837e-ovn-node-metrics-cert\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034738 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-systemd-units\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034770 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc98af3f-58df-4fba-a276-24bee012837e-ovnkube-config\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034788 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc98af3f-58df-4fba-a276-24bee012837e-ovnkube-script-lib\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 
19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034816 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-run-ovn\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034833 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-cni-netd\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034881 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-log-socket\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034903 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gg27c\" (UniqueName: \"kubernetes.io/projected/fc98af3f-58df-4fba-a276-24bee012837e-kube-api-access-gg27c\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034929 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-etc-openvswitch\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034960 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.034985 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-run-netns\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.035048 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc98af3f-58df-4fba-a276-24bee012837e-env-overrides\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.035742 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc98af3f-58df-4fba-a276-24bee012837e-env-overrides\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.035797 5118 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-var-lib-openvswitch\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.035828 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-cni-bin\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.040361 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-cni-netd\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.040465 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-systemd-units\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.041342 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-etc-openvswitch\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.041377 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc98af3f-58df-4fba-a276-24bee012837e-ovnkube-config\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.041384 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-log-socket\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.041424 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-run-ovn\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.041472 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.041472 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/fc98af3f-58df-4fba-a276-24bee012837e-host-run-netns\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.042286 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc98af3f-58df-4fba-a276-24bee012837e-ovnkube-script-lib\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.042712 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc98af3f-58df-4fba-a276-24bee012837e-ovn-node-metrics-cert\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.043044 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k6klf"] Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.047125 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k6klf"] Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.064737 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg27c\" (UniqueName: \"kubernetes.io/projected/fc98af3f-58df-4fba-a276-24bee012837e-kube-api-access-gg27c\") pod \"ovnkube-node-f6vpt\" (UID: \"fc98af3f-58df-4fba-a276-24bee012837e\") " pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.097194 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.104025 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6" path="/var/lib/kubelet/pods/e2b3e2b7-9ad6-416d-b00a-ac9bffbdd6a6/volumes" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.105442 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc62458c-133b-4909-91ab-b28870b78816" path="/var/lib/kubelet/pods/fc62458c-133b-4909-91ab-b28870b78816/volumes" Dec 08 19:39:28 crc kubenswrapper[5118]: W1208 19:39:28.115511 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc98af3f_58df_4fba_a276_24bee012837e.slice/crio-277ef5a488df1c9a85042d6919527a103bc97c6a8da7a49745087cb23a44beb9 WatchSource:0}: Error finding container 277ef5a488df1c9a85042d6919527a103bc97c6a8da7a49745087cb23a44beb9: Status 404 returned error can't find the container with id 277ef5a488df1c9a85042d6919527a103bc97c6a8da7a49745087cb23a44beb9 Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.725930 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-j4b8g_1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742/kube-multus/0.log" Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.726039 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-j4b8g" event={"ID":"1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742","Type":"ContainerStarted","Data":"30f1af8966bd29fd8427f338434e2254c3b7be58de18ce29b3a346d5947dddeb"} Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.728808 5118 generic.go:358] "Generic (PLEG): container finished" podID="fc98af3f-58df-4fba-a276-24bee012837e" containerID="7c22b7fe172097b2de75cd70704cf48ca59617f3faec53ea3152db7ea90149c9" exitCode=0 Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.728902 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" event={"ID":"fc98af3f-58df-4fba-a276-24bee012837e","Type":"ContainerDied","Data":"7c22b7fe172097b2de75cd70704cf48ca59617f3faec53ea3152db7ea90149c9"} Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.728937 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" event={"ID":"fc98af3f-58df-4fba-a276-24bee012837e","Type":"ContainerStarted","Data":"277ef5a488df1c9a85042d6919527a103bc97c6a8da7a49745087cb23a44beb9"} Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.731511 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" event={"ID":"6caec332-9df1-4299-978e-1c8fbe14f2c1","Type":"ContainerStarted","Data":"2b6b5c0abe89d271404d152062d744ba640520b1a75a12d36603efb02e17b0a8"} Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.731570 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" event={"ID":"6caec332-9df1-4299-978e-1c8fbe14f2c1","Type":"ContainerStarted","Data":"ef22fbff4cf6a9a845021d0ed58e8f327e15088f9eaf1e35c6ae17d42f8e2f4a"} Dec 08 19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.731592 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" event={"ID":"6caec332-9df1-4299-978e-1c8fbe14f2c1","Type":"ContainerStarted","Data":"89af5dd2c5c1140d5fe4cae25b2aa2ef4ba984f18a4a7674d2c6a43008a98024"} Dec 08 
19:39:28 crc kubenswrapper[5118]: I1208 19:39:28.774034 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-hb854" podStartSLOduration=1.773999843 podStartE2EDuration="1.773999843s" podCreationTimestamp="2025-12-08 19:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:39:28.76949349 +0000 UTC m=+621.062338957" watchObservedRunningTime="2025-12-08 19:39:28.773999843 +0000 UTC m=+621.066845370" Dec 08 19:39:29 crc kubenswrapper[5118]: I1208 19:39:29.745936 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" event={"ID":"fc98af3f-58df-4fba-a276-24bee012837e","Type":"ContainerStarted","Data":"e1f22caf349f920c8cc48cde9b14fb65df603470f73e686109d5f213f257903d"} Dec 08 19:39:29 crc kubenswrapper[5118]: I1208 19:39:29.746238 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" event={"ID":"fc98af3f-58df-4fba-a276-24bee012837e","Type":"ContainerStarted","Data":"d944bf1a7c86ccf98705901823f6490cedea239bafd98d9f9ff1914513235bd4"} Dec 08 19:39:29 crc kubenswrapper[5118]: I1208 19:39:29.746253 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" event={"ID":"fc98af3f-58df-4fba-a276-24bee012837e","Type":"ContainerStarted","Data":"03b3e1446ab4e1d7f65197c75b21f2208cae107a5e7554c7e99bc7d875b72123"} Dec 08 19:39:29 crc kubenswrapper[5118]: I1208 19:39:29.746265 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" event={"ID":"fc98af3f-58df-4fba-a276-24bee012837e","Type":"ContainerStarted","Data":"f689d8b7326490dfb218e3d1f6c1fedb31b208534aaae0b8c7a8c740417022ae"} Dec 08 19:39:29 crc kubenswrapper[5118]: I1208 19:39:29.746276 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" event={"ID":"fc98af3f-58df-4fba-a276-24bee012837e","Type":"ContainerStarted","Data":"a4e9dd3e958eca1ee7182b6190f8582aee228134cd23236511d421138444466e"} Dec 08 19:39:29 crc kubenswrapper[5118]: I1208 19:39:29.746286 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" event={"ID":"fc98af3f-58df-4fba-a276-24bee012837e","Type":"ContainerStarted","Data":"9a35b7e748e3e8076a7e18477231f0978339af39a9b0afe6d289942943e1671f"} Dec 08 19:39:31 crc kubenswrapper[5118]: I1208 19:39:31.766234 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" event={"ID":"fc98af3f-58df-4fba-a276-24bee012837e","Type":"ContainerStarted","Data":"abdc95aebcb3344280879a7afd1481732df39c353ac1dd4904513f03415c32ed"} Dec 08 19:39:34 crc kubenswrapper[5118]: I1208 19:39:34.791553 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" event={"ID":"fc98af3f-58df-4fba-a276-24bee012837e","Type":"ContainerStarted","Data":"b7aac0812ff17b37da6f96aaa22facee9fcd51c3e7f7e2a8dbef23bf01682b70"} Dec 08 19:39:34 crc kubenswrapper[5118]: I1208 19:39:34.791866 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:34 crc kubenswrapper[5118]: I1208 19:39:34.791879 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:34 crc 
kubenswrapper[5118]: I1208 19:39:34.791887 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:34 crc kubenswrapper[5118]: I1208 19:39:34.817776 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:34 crc kubenswrapper[5118]: I1208 19:39:34.820271 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" podStartSLOduration=7.820244568 podStartE2EDuration="7.820244568s" podCreationTimestamp="2025-12-08 19:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:39:34.818627554 +0000 UTC m=+627.111473081" watchObservedRunningTime="2025-12-08 19:39:34.820244568 +0000 UTC m=+627.113090025" Dec 08 19:39:34 crc kubenswrapper[5118]: I1208 19:39:34.822100 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:39:39 crc kubenswrapper[5118]: I1208 19:39:39.467356 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:39:39 crc kubenswrapper[5118]: I1208 19:39:39.467893 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:40:06 crc kubenswrapper[5118]: I1208 19:40:06.841117 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f6vpt" Dec 08 19:40:08 crc kubenswrapper[5118]: I1208 19:40:08.361664 5118 scope.go:117] "RemoveContainer" containerID="3c76de142b2f857046a2b0c4f36c88cf22b35c02d03f8513c826780f31014de4" Dec 08 19:40:08 crc kubenswrapper[5118]: I1208 19:40:08.384918 5118 scope.go:117] "RemoveContainer" containerID="ebb7bab4f88ec8dba2d4335caae7a71c141f61dd649a4738a19dac43d9570695" Dec 08 19:40:09 crc kubenswrapper[5118]: I1208 19:40:09.468006 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:40:09 crc kubenswrapper[5118]: I1208 19:40:09.468079 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:40:09 crc kubenswrapper[5118]: I1208 19:40:09.468141 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:40:09 crc kubenswrapper[5118]: I1208 19:40:09.468907 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"9d9ec033c2d11bd8a4bc45cbc441ba68a7926e1d3c57f8675045fe5aa0fb6da7"} pod="openshift-machine-config-operator/machine-config-daemon-twnt9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:40:09 crc kubenswrapper[5118]: I1208 19:40:09.468997 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" containerID="cri-o://9d9ec033c2d11bd8a4bc45cbc441ba68a7926e1d3c57f8675045fe5aa0fb6da7" gracePeriod=600 Dec 08 19:40:10 crc kubenswrapper[5118]: I1208 19:40:10.014302 5118 generic.go:358] "Generic (PLEG): container finished" podID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerID="9d9ec033c2d11bd8a4bc45cbc441ba68a7926e1d3c57f8675045fe5aa0fb6da7" exitCode=0 Dec 08 19:40:10 crc kubenswrapper[5118]: I1208 19:40:10.014392 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerDied","Data":"9d9ec033c2d11bd8a4bc45cbc441ba68a7926e1d3c57f8675045fe5aa0fb6da7"} Dec 08 19:40:10 crc kubenswrapper[5118]: I1208 19:40:10.014892 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerStarted","Data":"d431454154fbcd4ebfcd3a345d3b257b49f1ea186ad3587cfb5ff74b16d0d0b8"} Dec 08 19:40:10 crc kubenswrapper[5118]: I1208 19:40:10.014944 5118 scope.go:117] "RemoveContainer" containerID="9a83000e5f1454b2084ae52cb60bef4a8eb4e2dca054391d550af658c8371fed" Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.164760 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p8vn"] Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.165517 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2p8vn" podUID="a0364b29-2456-4ccd-8b62-0374c2c8959c" containerName="registry-server" containerID="cri-o://e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d" gracePeriod=30 Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.530301 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2p8vn" Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.648094 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-utilities\") pod \"a0364b29-2456-4ccd-8b62-0374c2c8959c\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.648286 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-catalog-content\") pod \"a0364b29-2456-4ccd-8b62-0374c2c8959c\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.648322 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzgsn\" (UniqueName: \"kubernetes.io/projected/a0364b29-2456-4ccd-8b62-0374c2c8959c-kube-api-access-gzgsn\") pod \"a0364b29-2456-4ccd-8b62-0374c2c8959c\" (UID: \"a0364b29-2456-4ccd-8b62-0374c2c8959c\") " Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.649099 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-utilities" (OuterVolumeSpecName: "utilities") pod "a0364b29-2456-4ccd-8b62-0374c2c8959c" (UID: "a0364b29-2456-4ccd-8b62-0374c2c8959c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.658764 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0364b29-2456-4ccd-8b62-0374c2c8959c-kube-api-access-gzgsn" (OuterVolumeSpecName: "kube-api-access-gzgsn") pod "a0364b29-2456-4ccd-8b62-0374c2c8959c" (UID: "a0364b29-2456-4ccd-8b62-0374c2c8959c"). InnerVolumeSpecName "kube-api-access-gzgsn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.663937 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0364b29-2456-4ccd-8b62-0374c2c8959c" (UID: "a0364b29-2456-4ccd-8b62-0374c2c8959c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.749325 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.749600 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0364b29-2456-4ccd-8b62-0374c2c8959c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:35 crc kubenswrapper[5118]: I1208 19:40:35.749677 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gzgsn\" (UniqueName: \"kubernetes.io/projected/a0364b29-2456-4ccd-8b62-0374c2c8959c-kube-api-access-gzgsn\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.173606 5118 generic.go:358] "Generic (PLEG): container finished" podID="a0364b29-2456-4ccd-8b62-0374c2c8959c" containerID="e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d" exitCode=0 Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.173767 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p8vn" event={"ID":"a0364b29-2456-4ccd-8b62-0374c2c8959c","Type":"ContainerDied","Data":"e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d"} Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.173800 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p8vn" event={"ID":"a0364b29-2456-4ccd-8b62-0374c2c8959c","Type":"ContainerDied","Data":"36323cb9913bb8bbf213a986f8a7c83a5449bd4135e8f0e0408674976bbe6ae6"} Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.173819 5118 scope.go:117] "RemoveContainer" containerID="e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.173975 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2p8vn" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.190380 5118 scope.go:117] "RemoveContainer" containerID="a92038cd1b672310e2d666a93a48d670a43d02a0a399a6599aadae2bf034650b" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.197819 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-gszs6"] Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.198567 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0364b29-2456-4ccd-8b62-0374c2c8959c" containerName="extract-utilities" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.198589 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0364b29-2456-4ccd-8b62-0374c2c8959c" containerName="extract-utilities" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.198614 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0364b29-2456-4ccd-8b62-0374c2c8959c" containerName="registry-server" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.198622 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0364b29-2456-4ccd-8b62-0374c2c8959c" containerName="registry-server" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.198638 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0364b29-2456-4ccd-8b62-0374c2c8959c" containerName="extract-content" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.198646 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0364b29-2456-4ccd-8b62-0374c2c8959c" containerName="extract-content" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.198775 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="a0364b29-2456-4ccd-8b62-0374c2c8959c" containerName="registry-server" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.212102 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p8vn"] Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.212145 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p8vn"] Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.212280 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.214838 5118 scope.go:117] "RemoveContainer" containerID="44da1abb0fe7c8889eae4e01e3bb2f8adfbec07f3c22bcc7d3631bb6be968e07" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.231106 5118 scope.go:117] "RemoveContainer" containerID="e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d" Dec 08 19:40:36 crc kubenswrapper[5118]: E1208 19:40:36.231561 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d\": container with ID starting with e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d not found: ID does not exist" containerID="e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.231607 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d"} err="failed to get container status \"e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d\": rpc error: code = NotFound desc = could not find container \"e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d\": container with ID starting with e1f2d1fc1d02534e3bfe2cd9f9fc763f160fb9ebecb7ecfa2814c111a65c6b3d not found: ID does not exist" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.231634 5118 scope.go:117] "RemoveContainer" containerID="a92038cd1b672310e2d666a93a48d670a43d02a0a399a6599aadae2bf034650b" Dec 08 19:40:36 crc kubenswrapper[5118]: E1208 19:40:36.231973 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a92038cd1b672310e2d666a93a48d670a43d02a0a399a6599aadae2bf034650b\": container with ID starting with a92038cd1b672310e2d666a93a48d670a43d02a0a399a6599aadae2bf034650b not found: ID does not exist" containerID="a92038cd1b672310e2d666a93a48d670a43d02a0a399a6599aadae2bf034650b" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.232025 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a92038cd1b672310e2d666a93a48d670a43d02a0a399a6599aadae2bf034650b"} err="failed to get container status \"a92038cd1b672310e2d666a93a48d670a43d02a0a399a6599aadae2bf034650b\": rpc error: code = NotFound desc = could not find container \"a92038cd1b672310e2d666a93a48d670a43d02a0a399a6599aadae2bf034650b\": container with ID starting with a92038cd1b672310e2d666a93a48d670a43d02a0a399a6599aadae2bf034650b not found: ID does not exist" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.232050 5118 scope.go:117] "RemoveContainer" containerID="44da1abb0fe7c8889eae4e01e3bb2f8adfbec07f3c22bcc7d3631bb6be968e07" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.232166 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-gszs6"] Dec 08 19:40:36 crc kubenswrapper[5118]: E1208 19:40:36.232334 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44da1abb0fe7c8889eae4e01e3bb2f8adfbec07f3c22bcc7d3631bb6be968e07\": container with ID starting with 44da1abb0fe7c8889eae4e01e3bb2f8adfbec07f3c22bcc7d3631bb6be968e07 not found: ID does not exist" 
containerID="44da1abb0fe7c8889eae4e01e3bb2f8adfbec07f3c22bcc7d3631bb6be968e07" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.232356 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44da1abb0fe7c8889eae4e01e3bb2f8adfbec07f3c22bcc7d3631bb6be968e07"} err="failed to get container status \"44da1abb0fe7c8889eae4e01e3bb2f8adfbec07f3c22bcc7d3631bb6be968e07\": rpc error: code = NotFound desc = could not find container \"44da1abb0fe7c8889eae4e01e3bb2f8adfbec07f3c22bcc7d3631bb6be968e07\": container with ID starting with 44da1abb0fe7c8889eae4e01e3bb2f8adfbec07f3c22bcc7d3631bb6be968e07 not found: ID does not exist" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.357398 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.357460 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h46m6\" (UniqueName: \"kubernetes.io/projected/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-kube-api-access-h46m6\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.357492 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.357530 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-bound-sa-token\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.357569 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.357596 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-registry-certificates\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.357622 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-registry-tls\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.357658 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-trusted-ca\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.386157 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.458663 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.458764 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h46m6\" (UniqueName: \"kubernetes.io/projected/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-kube-api-access-h46m6\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.458856 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.458901 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-bound-sa-token\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.459147 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-registry-certificates\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.459191 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-registry-tls\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.459219 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-trusted-ca\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.459527 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.460723 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-trusted-ca\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.460919 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-registry-certificates\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.463989 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.464065 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-registry-tls\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.474138 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-bound-sa-token\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.474321 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h46m6\" (UniqueName: \"kubernetes.io/projected/ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3-kube-api-access-h46m6\") pod \"image-registry-5d9d95bf5b-gszs6\" (UID: \"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.535234 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:36 crc kubenswrapper[5118]: I1208 19:40:36.705795 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-gszs6"] Dec 08 19:40:37 crc kubenswrapper[5118]: I1208 19:40:37.180553 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" event={"ID":"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3","Type":"ContainerStarted","Data":"485e1765ccb91d4b7374fe20ba3f0e0643e8a254c8111394861d9a93b90301c4"} Dec 08 19:40:37 crc kubenswrapper[5118]: I1208 19:40:37.181439 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" event={"ID":"ddc6c2d6-dad2-44be-8162-3f48e3f0aaa3","Type":"ContainerStarted","Data":"ba1e1545ca9c2d91ebb1673cfd14e4125e8b759ce1476bee228d97b56206250f"} Dec 08 19:40:37 crc kubenswrapper[5118]: I1208 19:40:37.199787 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" podStartSLOduration=1.1997689280000001 podStartE2EDuration="1.199768928s" podCreationTimestamp="2025-12-08 19:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:40:37.198041711 +0000 UTC m=+689.490887188" watchObservedRunningTime="2025-12-08 19:40:37.199768928 +0000 UTC m=+689.492614385" Dec 08 19:40:38 crc kubenswrapper[5118]: I1208 19:40:38.101962 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0364b29-2456-4ccd-8b62-0374c2c8959c" path="/var/lib/kubelet/pods/a0364b29-2456-4ccd-8b62-0374c2c8959c/volumes" Dec 08 19:40:38 crc kubenswrapper[5118]: I1208 19:40:38.186783 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.034327 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v"] Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.044052 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.044959 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v"] Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.047627 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.103676 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcrmq\" (UniqueName: \"kubernetes.io/projected/514ddafa-98d6-4802-b777-340563a550f5-kube-api-access-rcrmq\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.103959 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.104170 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.205219 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rcrmq\" (UniqueName: \"kubernetes.io/projected/514ddafa-98d6-4802-b777-340563a550f5-kube-api-access-rcrmq\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.205264 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.205299 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.206079 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.206134 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.229149 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcrmq\" (UniqueName: \"kubernetes.io/projected/514ddafa-98d6-4802-b777-340563a550f5-kube-api-access-rcrmq\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.364787 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:39 crc kubenswrapper[5118]: I1208 19:40:39.770075 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v"] Dec 08 19:40:40 crc kubenswrapper[5118]: I1208 19:40:40.197896 5118 generic.go:358] "Generic (PLEG): container finished" podID="514ddafa-98d6-4802-b777-340563a550f5" containerID="6b7cd219717b9ef9a3f8f6ae5aa5d82088d91b82657f9baa0a8d513e1479e129" exitCode=0 Dec 08 19:40:40 crc kubenswrapper[5118]: I1208 19:40:40.198187 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" event={"ID":"514ddafa-98d6-4802-b777-340563a550f5","Type":"ContainerDied","Data":"6b7cd219717b9ef9a3f8f6ae5aa5d82088d91b82657f9baa0a8d513e1479e129"} Dec 08 19:40:40 crc kubenswrapper[5118]: I1208 19:40:40.198578 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" event={"ID":"514ddafa-98d6-4802-b777-340563a550f5","Type":"ContainerStarted","Data":"7db5e67a68b44d1f97e7b71ecb939097a7eba27d2971c018f758edd6b3244de7"} Dec 08 19:40:41 crc kubenswrapper[5118]: I1208 19:40:41.206032 5118 generic.go:358] "Generic (PLEG): container finished" podID="514ddafa-98d6-4802-b777-340563a550f5" containerID="f765eb7354ccd1ea1362f1d3494cf4be65ea4b2c956811c712a76e186694321b" exitCode=0 Dec 08 19:40:41 crc kubenswrapper[5118]: I1208 19:40:41.206093 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" event={"ID":"514ddafa-98d6-4802-b777-340563a550f5","Type":"ContainerDied","Data":"f765eb7354ccd1ea1362f1d3494cf4be65ea4b2c956811c712a76e186694321b"} Dec 08 19:40:42 crc kubenswrapper[5118]: I1208 19:40:42.214254 5118 generic.go:358] "Generic (PLEG): container finished" podID="514ddafa-98d6-4802-b777-340563a550f5" containerID="f0817b3dbf482afd6d985694687dbbc1c045c52ee4d5326ec77ce4ed6b6e015b" exitCode=0 Dec 08 19:40:42 crc kubenswrapper[5118]: I1208 
19:40:42.214328 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" event={"ID":"514ddafa-98d6-4802-b777-340563a550f5","Type":"ContainerDied","Data":"f0817b3dbf482afd6d985694687dbbc1c045c52ee4d5326ec77ce4ed6b6e015b"} Dec 08 19:40:43 crc kubenswrapper[5118]: I1208 19:40:43.456980 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:43 crc kubenswrapper[5118]: I1208 19:40:43.567085 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-bundle\") pod \"514ddafa-98d6-4802-b777-340563a550f5\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " Dec 08 19:40:43 crc kubenswrapper[5118]: I1208 19:40:43.567213 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcrmq\" (UniqueName: \"kubernetes.io/projected/514ddafa-98d6-4802-b777-340563a550f5-kube-api-access-rcrmq\") pod \"514ddafa-98d6-4802-b777-340563a550f5\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " Dec 08 19:40:43 crc kubenswrapper[5118]: I1208 19:40:43.567249 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-util\") pod \"514ddafa-98d6-4802-b777-340563a550f5\" (UID: \"514ddafa-98d6-4802-b777-340563a550f5\") " Dec 08 19:40:43 crc kubenswrapper[5118]: I1208 19:40:43.569952 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-bundle" (OuterVolumeSpecName: "bundle") pod "514ddafa-98d6-4802-b777-340563a550f5" (UID: "514ddafa-98d6-4802-b777-340563a550f5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:43 crc kubenswrapper[5118]: I1208 19:40:43.573739 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514ddafa-98d6-4802-b777-340563a550f5-kube-api-access-rcrmq" (OuterVolumeSpecName: "kube-api-access-rcrmq") pod "514ddafa-98d6-4802-b777-340563a550f5" (UID: "514ddafa-98d6-4802-b777-340563a550f5"). InnerVolumeSpecName "kube-api-access-rcrmq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:40:43 crc kubenswrapper[5118]: I1208 19:40:43.587019 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-util" (OuterVolumeSpecName: "util") pod "514ddafa-98d6-4802-b777-340563a550f5" (UID: "514ddafa-98d6-4802-b777-340563a550f5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:43 crc kubenswrapper[5118]: I1208 19:40:43.668481 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:43 crc kubenswrapper[5118]: I1208 19:40:43.668773 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rcrmq\" (UniqueName: \"kubernetes.io/projected/514ddafa-98d6-4802-b777-340563a550f5-kube-api-access-rcrmq\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:43 crc kubenswrapper[5118]: I1208 19:40:43.668847 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/514ddafa-98d6-4802-b777-340563a550f5-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:44 crc kubenswrapper[5118]: I1208 19:40:44.228344 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" event={"ID":"514ddafa-98d6-4802-b777-340563a550f5","Type":"ContainerDied","Data":"7db5e67a68b44d1f97e7b71ecb939097a7eba27d2971c018f758edd6b3244de7"} Dec 08 19:40:44 crc kubenswrapper[5118]: I1208 19:40:44.228380 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7db5e67a68b44d1f97e7b71ecb939097a7eba27d2971c018f758edd6b3244de7" Dec 08 19:40:44 crc kubenswrapper[5118]: I1208 19:40:44.228823 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hzt6v" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.809874 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf"] Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.810840 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="514ddafa-98d6-4802-b777-340563a550f5" containerName="extract" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.810860 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="514ddafa-98d6-4802-b777-340563a550f5" containerName="extract" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.810882 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="514ddafa-98d6-4802-b777-340563a550f5" containerName="pull" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.810889 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="514ddafa-98d6-4802-b777-340563a550f5" containerName="pull" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.810914 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="514ddafa-98d6-4802-b777-340563a550f5" containerName="util" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.810922 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="514ddafa-98d6-4802-b777-340563a550f5" containerName="util" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.811059 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="514ddafa-98d6-4802-b777-340563a550f5" containerName="extract" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.845271 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf"] Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.845422 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.847825 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.999506 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.999576 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxcnc\" (UniqueName: \"kubernetes.io/projected/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-kube-api-access-gxcnc\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:45 crc kubenswrapper[5118]: I1208 19:40:45.999677 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:46 crc kubenswrapper[5118]: I1208 19:40:46.100902 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:46 crc kubenswrapper[5118]: I1208 19:40:46.100984 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gxcnc\" (UniqueName: \"kubernetes.io/projected/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-kube-api-access-gxcnc\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:46 crc kubenswrapper[5118]: I1208 19:40:46.101109 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:46 crc kubenswrapper[5118]: I1208 19:40:46.101631 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:46 crc kubenswrapper[5118]: I1208 19:40:46.102015 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:46 crc kubenswrapper[5118]: I1208 19:40:46.123254 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxcnc\" (UniqueName: \"kubernetes.io/projected/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-kube-api-access-gxcnc\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:46 crc kubenswrapper[5118]: I1208 19:40:46.165842 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:46 crc kubenswrapper[5118]: I1208 19:40:46.599145 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf"] Dec 08 19:40:47 crc kubenswrapper[5118]: I1208 19:40:47.243428 5118 generic.go:358] "Generic (PLEG): container finished" podID="09bca8f9-0a5a-4b35-a8f7-fa743e62245e" containerID="97acdbc8a61554db142941400a185e535d348d36a049e84f3c641eb6df1ba283" exitCode=0 Dec 08 19:40:47 crc kubenswrapper[5118]: I1208 19:40:47.243472 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" event={"ID":"09bca8f9-0a5a-4b35-a8f7-fa743e62245e","Type":"ContainerDied","Data":"97acdbc8a61554db142941400a185e535d348d36a049e84f3c641eb6df1ba283"} Dec 08 19:40:47 crc kubenswrapper[5118]: I1208 19:40:47.243833 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" event={"ID":"09bca8f9-0a5a-4b35-a8f7-fa743e62245e","Type":"ContainerStarted","Data":"42e343309614bf6d2e59e3389a7e4a6fcfa1e745d1deb1ba3d93ce952cd23d45"} Dec 08 19:40:48 crc kubenswrapper[5118]: I1208 19:40:48.251065 5118 generic.go:358] "Generic (PLEG): container finished" podID="09bca8f9-0a5a-4b35-a8f7-fa743e62245e" containerID="50701b62ffdef613990c1e1a2abe61ea69f0af3dd2342b52fdfcce448c2f02b6" exitCode=0 Dec 08 19:40:48 crc kubenswrapper[5118]: I1208 19:40:48.251114 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" event={"ID":"09bca8f9-0a5a-4b35-a8f7-fa743e62245e","Type":"ContainerDied","Data":"50701b62ffdef613990c1e1a2abe61ea69f0af3dd2342b52fdfcce448c2f02b6"} Dec 08 19:40:49 crc kubenswrapper[5118]: I1208 19:40:49.258672 5118 generic.go:358] "Generic (PLEG): container finished" podID="09bca8f9-0a5a-4b35-a8f7-fa743e62245e" containerID="bea2aa861f379ab89dac487e54af66b1703398f97d30efe90819562ad0dc78f1" exitCode=0 Dec 08 19:40:49 crc kubenswrapper[5118]: I1208 19:40:49.258746 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" 
event={"ID":"09bca8f9-0a5a-4b35-a8f7-fa743e62245e","Type":"ContainerDied","Data":"bea2aa861f379ab89dac487e54af66b1703398f97d30efe90819562ad0dc78f1"} Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.612576 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.661097 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-util\") pod \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.661218 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxcnc\" (UniqueName: \"kubernetes.io/projected/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-kube-api-access-gxcnc\") pod \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.661313 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-bundle\") pod \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\" (UID: \"09bca8f9-0a5a-4b35-a8f7-fa743e62245e\") " Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.662214 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-bundle" (OuterVolumeSpecName: "bundle") pod "09bca8f9-0a5a-4b35-a8f7-fa743e62245e" (UID: "09bca8f9-0a5a-4b35-a8f7-fa743e62245e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.670945 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-kube-api-access-gxcnc" (OuterVolumeSpecName: "kube-api-access-gxcnc") pod "09bca8f9-0a5a-4b35-a8f7-fa743e62245e" (UID: "09bca8f9-0a5a-4b35-a8f7-fa743e62245e"). InnerVolumeSpecName "kube-api-access-gxcnc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.683869 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-util" (OuterVolumeSpecName: "util") pod "09bca8f9-0a5a-4b35-a8f7-fa743e62245e" (UID: "09bca8f9-0a5a-4b35-a8f7-fa743e62245e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.698507 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq"] Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.699254 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09bca8f9-0a5a-4b35-a8f7-fa743e62245e" containerName="util" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.699281 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bca8f9-0a5a-4b35-a8f7-fa743e62245e" containerName="util" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.699312 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09bca8f9-0a5a-4b35-a8f7-fa743e62245e" containerName="extract" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.699321 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bca8f9-0a5a-4b35-a8f7-fa743e62245e" containerName="extract" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.699331 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09bca8f9-0a5a-4b35-a8f7-fa743e62245e" containerName="pull" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.699340 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bca8f9-0a5a-4b35-a8f7-fa743e62245e" containerName="pull" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.699467 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="09bca8f9-0a5a-4b35-a8f7-fa743e62245e" containerName="extract" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.715034 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.739249 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq"] Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.762602 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.762656 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.762740 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhdql\" (UniqueName: \"kubernetes.io/projected/9b13bede-3571-4763-a7ba-55f8be80930d-kube-api-access-qhdql\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:50 crc 
kubenswrapper[5118]: I1208 19:40:50.762781 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.762792 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.762802 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gxcnc\" (UniqueName: \"kubernetes.io/projected/09bca8f9-0a5a-4b35-a8f7-fa743e62245e-kube-api-access-gxcnc\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.863383 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.863437 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.863498 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qhdql\" (UniqueName: \"kubernetes.io/projected/9b13bede-3571-4763-a7ba-55f8be80930d-kube-api-access-qhdql\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.864278 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.864501 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:50 crc kubenswrapper[5118]: I1208 19:40:50.884031 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhdql\" (UniqueName: \"kubernetes.io/projected/9b13bede-3571-4763-a7ba-55f8be80930d-kube-api-access-qhdql\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:51 crc 
kubenswrapper[5118]: I1208 19:40:51.041332 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:51 crc kubenswrapper[5118]: I1208 19:40:51.279243 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" event={"ID":"09bca8f9-0a5a-4b35-a8f7-fa743e62245e","Type":"ContainerDied","Data":"42e343309614bf6d2e59e3389a7e4a6fcfa1e745d1deb1ba3d93ce952cd23d45"} Dec 08 19:40:51 crc kubenswrapper[5118]: I1208 19:40:51.279484 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42e343309614bf6d2e59e3389a7e4a6fcfa1e745d1deb1ba3d93ce952cd23d45" Dec 08 19:40:51 crc kubenswrapper[5118]: I1208 19:40:51.279570 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e85skf" Dec 08 19:40:51 crc kubenswrapper[5118]: I1208 19:40:51.542782 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq"] Dec 08 19:40:51 crc kubenswrapper[5118]: W1208 19:40:51.547590 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b13bede_3571_4763_a7ba_55f8be80930d.slice/crio-e71ec9a6f727d6a05d64114bf2c8232e997d0cc8aa5787772d34e04a260ab451 WatchSource:0}: Error finding container e71ec9a6f727d6a05d64114bf2c8232e997d0cc8aa5787772d34e04a260ab451: Status 404 returned error can't find the container with id e71ec9a6f727d6a05d64114bf2c8232e997d0cc8aa5787772d34e04a260ab451 Dec 08 19:40:52 crc kubenswrapper[5118]: I1208 19:40:52.286344 5118 generic.go:358] "Generic (PLEG): container finished" podID="9b13bede-3571-4763-a7ba-55f8be80930d" containerID="1708bc4c4c10b7366be4487ac7725b6f18261bfa1c7932f638004a07ebbd419b" exitCode=0 Dec 08 19:40:52 crc kubenswrapper[5118]: I1208 19:40:52.286433 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" event={"ID":"9b13bede-3571-4763-a7ba-55f8be80930d","Type":"ContainerDied","Data":"1708bc4c4c10b7366be4487ac7725b6f18261bfa1c7932f638004a07ebbd419b"} Dec 08 19:40:52 crc kubenswrapper[5118]: I1208 19:40:52.286765 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" event={"ID":"9b13bede-3571-4763-a7ba-55f8be80930d","Type":"ContainerStarted","Data":"e71ec9a6f727d6a05d64114bf2c8232e997d0cc8aa5787772d34e04a260ab451"} Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.676748 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-qw8pf"] Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.680426 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-qw8pf" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.682990 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.683821 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-4cz6w\"" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.690501 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-qw8pf"] Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.707951 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk"] Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.713813 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.716485 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-b6fj4\"" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.716766 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.718916 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk"] Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.726070 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g"] Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.730258 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.732585 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2pvk\" (UniqueName: \"kubernetes.io/projected/49800b56-28af-432c-8b8e-68f8fb223895-kube-api-access-p2pvk\") pod \"obo-prometheus-operator-86648f486b-qw8pf\" (UID: \"49800b56-28af-432c-8b8e-68f8fb223895\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-qw8pf" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.732670 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4bd890c0-c2f1-4cac-aaea-a4c79efedc11-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk\" (UID: \"4bd890c0-c2f1-4cac-aaea-a4c79efedc11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.732756 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4bd890c0-c2f1-4cac-aaea-a4c79efedc11-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk\" (UID: \"4bd890c0-c2f1-4cac-aaea-a4c79efedc11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.738359 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.762696 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g"] Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.838363 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f50a79c-af0a-4520-9c18-2c686373e86e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g\" (UID: \"0f50a79c-af0a-4520-9c18-2c686373e86e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.838481 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p2pvk\" (UniqueName: \"kubernetes.io/projected/49800b56-28af-432c-8b8e-68f8fb223895-kube-api-access-p2pvk\") pod \"obo-prometheus-operator-86648f486b-qw8pf\" (UID: \"49800b56-28af-432c-8b8e-68f8fb223895\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-qw8pf" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.838547 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4bd890c0-c2f1-4cac-aaea-a4c79efedc11-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk\" (UID: \"4bd890c0-c2f1-4cac-aaea-a4c79efedc11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.838575 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/0f50a79c-af0a-4520-9c18-2c686373e86e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g\" (UID: \"0f50a79c-af0a-4520-9c18-2c686373e86e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.838607 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4bd890c0-c2f1-4cac-aaea-a4c79efedc11-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk\" (UID: \"4bd890c0-c2f1-4cac-aaea-a4c79efedc11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.848168 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4bd890c0-c2f1-4cac-aaea-a4c79efedc11-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk\" (UID: \"4bd890c0-c2f1-4cac-aaea-a4c79efedc11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.848172 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4bd890c0-c2f1-4cac-aaea-a4c79efedc11-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk\" (UID: \"4bd890c0-c2f1-4cac-aaea-a4c79efedc11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.868433 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2pvk\" (UniqueName: \"kubernetes.io/projected/49800b56-28af-432c-8b8e-68f8fb223895-kube-api-access-p2pvk\") pod \"obo-prometheus-operator-86648f486b-qw8pf\" (UID: \"49800b56-28af-432c-8b8e-68f8fb223895\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-qw8pf" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.935753 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-jlh8f"] Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.939737 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f50a79c-af0a-4520-9c18-2c686373e86e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g\" (UID: \"0f50a79c-af0a-4520-9c18-2c686373e86e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.939820 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f50a79c-af0a-4520-9c18-2c686373e86e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g\" (UID: \"0f50a79c-af0a-4520-9c18-2c686373e86e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.944137 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f50a79c-af0a-4520-9c18-2c686373e86e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g\" (UID: \"0f50a79c-af0a-4520-9c18-2c686373e86e\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.945156 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-jlh8f" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.956113 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.956514 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-kmqw4\"" Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.960026 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-jlh8f"] Dec 08 19:40:55 crc kubenswrapper[5118]: I1208 19:40:55.960282 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f50a79c-af0a-4520-9c18-2c686373e86e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g\" (UID: \"0f50a79c-af0a-4520-9c18-2c686373e86e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.033448 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-qw8pf" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.041326 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f099b829-f9f7-4c78-b52c-a079207ebca8-observability-operator-tls\") pod \"observability-operator-78c97476f4-jlh8f\" (UID: \"f099b829-f9f7-4c78-b52c-a079207ebca8\") " pod="openshift-operators/observability-operator-78c97476f4-jlh8f" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.041380 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp7gc\" (UniqueName: \"kubernetes.io/projected/f099b829-f9f7-4c78-b52c-a079207ebca8-kube-api-access-vp7gc\") pod \"observability-operator-78c97476f4-jlh8f\" (UID: \"f099b829-f9f7-4c78-b52c-a079207ebca8\") " pod="openshift-operators/observability-operator-78c97476f4-jlh8f" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.107082 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-cddjw"] Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.128039 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.130857 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-cddjw"] Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.131048 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.134636 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-dmst9\"" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.144440 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f099b829-f9f7-4c78-b52c-a079207ebca8-observability-operator-tls\") pod \"observability-operator-78c97476f4-jlh8f\" (UID: \"f099b829-f9f7-4c78-b52c-a079207ebca8\") " pod="openshift-operators/observability-operator-78c97476f4-jlh8f" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.144541 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vp7gc\" (UniqueName: \"kubernetes.io/projected/f099b829-f9f7-4c78-b52c-a079207ebca8-kube-api-access-vp7gc\") pod \"observability-operator-78c97476f4-jlh8f\" (UID: \"f099b829-f9f7-4c78-b52c-a079207ebca8\") " pod="openshift-operators/observability-operator-78c97476f4-jlh8f" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.148935 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.152866 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f099b829-f9f7-4c78-b52c-a079207ebca8-observability-operator-tls\") pod \"observability-operator-78c97476f4-jlh8f\" (UID: \"f099b829-f9f7-4c78-b52c-a079207ebca8\") " pod="openshift-operators/observability-operator-78c97476f4-jlh8f" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.170595 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp7gc\" (UniqueName: \"kubernetes.io/projected/f099b829-f9f7-4c78-b52c-a079207ebca8-kube-api-access-vp7gc\") pod \"observability-operator-78c97476f4-jlh8f\" (UID: \"f099b829-f9f7-4c78-b52c-a079207ebca8\") " pod="openshift-operators/observability-operator-78c97476f4-jlh8f" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.247930 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4dafe562-d7da-46d9-bef9-33bd0eb4e4ed-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-cddjw\" (UID: \"4dafe562-d7da-46d9-bef9-33bd0eb4e4ed\") " pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.247972 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7crtt\" (UniqueName: \"kubernetes.io/projected/4dafe562-d7da-46d9-bef9-33bd0eb4e4ed-kube-api-access-7crtt\") pod \"perses-operator-68bdb49cbf-cddjw\" (UID: \"4dafe562-d7da-46d9-bef9-33bd0eb4e4ed\") " pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.286134 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-jlh8f" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.333076 5118 generic.go:358] "Generic (PLEG): container finished" podID="9b13bede-3571-4763-a7ba-55f8be80930d" containerID="ec44868b40743341aad960d1b235977c882ec6c0966cc00eebe56ffb1c5f19ac" exitCode=0 Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.333680 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" event={"ID":"9b13bede-3571-4763-a7ba-55f8be80930d","Type":"ContainerDied","Data":"ec44868b40743341aad960d1b235977c882ec6c0966cc00eebe56ffb1c5f19ac"} Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.344007 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-qw8pf"] Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.356436 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4dafe562-d7da-46d9-bef9-33bd0eb4e4ed-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-cddjw\" (UID: \"4dafe562-d7da-46d9-bef9-33bd0eb4e4ed\") " pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.356480 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7crtt\" (UniqueName: \"kubernetes.io/projected/4dafe562-d7da-46d9-bef9-33bd0eb4e4ed-kube-api-access-7crtt\") pod \"perses-operator-68bdb49cbf-cddjw\" (UID: \"4dafe562-d7da-46d9-bef9-33bd0eb4e4ed\") " pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.362057 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4dafe562-d7da-46d9-bef9-33bd0eb4e4ed-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-cddjw\" (UID: \"4dafe562-d7da-46d9-bef9-33bd0eb4e4ed\") " pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.386815 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7crtt\" (UniqueName: \"kubernetes.io/projected/4dafe562-d7da-46d9-bef9-33bd0eb4e4ed-kube-api-access-7crtt\") pod \"perses-operator-68bdb49cbf-cddjw\" (UID: \"4dafe562-d7da-46d9-bef9-33bd0eb4e4ed\") " pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.405896 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk"] Dec 08 19:40:56 crc kubenswrapper[5118]: W1208 19:40:56.436374 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bd890c0_c2f1_4cac_aaea_a4c79efedc11.slice/crio-123e6e4772e948a0dd78c4c225c00a51c433353f3c3b508b9b70e41606f259d3 WatchSource:0}: Error finding container 123e6e4772e948a0dd78c4c225c00a51c433353f3c3b508b9b70e41606f259d3: Status 404 returned error can't find the container with id 123e6e4772e948a0dd78c4c225c00a51c433353f3c3b508b9b70e41606f259d3 Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.463850 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.527838 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g"] Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.636564 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-jlh8f"] Dec 08 19:40:56 crc kubenswrapper[5118]: W1208 19:40:56.657853 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf099b829_f9f7_4c78_b52c_a079207ebca8.slice/crio-b103ca4faeb112f6b858909de6785fd7f7617afca29ba68c4e53e9747aedb099 WatchSource:0}: Error finding container b103ca4faeb112f6b858909de6785fd7f7617afca29ba68c4e53e9747aedb099: Status 404 returned error can't find the container with id b103ca4faeb112f6b858909de6785fd7f7617afca29ba68c4e53e9747aedb099 Dec 08 19:40:56 crc kubenswrapper[5118]: I1208 19:40:56.941915 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-cddjw"] Dec 08 19:40:56 crc kubenswrapper[5118]: W1208 19:40:56.981594 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dafe562_d7da_46d9_bef9_33bd0eb4e4ed.slice/crio-ed646d25c9fc6d6395e05d81a3f745c119df8d2c56301cff2ada4025c5de80aa WatchSource:0}: Error finding container ed646d25c9fc6d6395e05d81a3f745c119df8d2c56301cff2ada4025c5de80aa: Status 404 returned error can't find the container with id ed646d25c9fc6d6395e05d81a3f745c119df8d2c56301cff2ada4025c5de80aa Dec 08 19:40:57 crc kubenswrapper[5118]: I1208 19:40:57.339650 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk" event={"ID":"4bd890c0-c2f1-4cac-aaea-a4c79efedc11","Type":"ContainerStarted","Data":"123e6e4772e948a0dd78c4c225c00a51c433353f3c3b508b9b70e41606f259d3"} Dec 08 19:40:57 crc kubenswrapper[5118]: I1208 19:40:57.340743 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" event={"ID":"4dafe562-d7da-46d9-bef9-33bd0eb4e4ed","Type":"ContainerStarted","Data":"ed646d25c9fc6d6395e05d81a3f745c119df8d2c56301cff2ada4025c5de80aa"} Dec 08 19:40:57 crc kubenswrapper[5118]: I1208 19:40:57.341597 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-qw8pf" event={"ID":"49800b56-28af-432c-8b8e-68f8fb223895","Type":"ContainerStarted","Data":"06c1f3a359e292bdba6375cfe4a5883a31039a3ce6679770d22dd21a76586f07"} Dec 08 19:40:57 crc kubenswrapper[5118]: I1208 19:40:57.342506 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-jlh8f" event={"ID":"f099b829-f9f7-4c78-b52c-a079207ebca8","Type":"ContainerStarted","Data":"b103ca4faeb112f6b858909de6785fd7f7617afca29ba68c4e53e9747aedb099"} Dec 08 19:40:57 crc kubenswrapper[5118]: I1208 19:40:57.343452 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g" event={"ID":"0f50a79c-af0a-4520-9c18-2c686373e86e","Type":"ContainerStarted","Data":"cd42d4fe0066606a86098cf5bed4c5cfd817caf276cec7b06ab9cb75d52791ff"} Dec 08 19:40:57 crc kubenswrapper[5118]: I1208 19:40:57.346443 5118 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" event={"ID":"9b13bede-3571-4763-a7ba-55f8be80930d","Type":"ContainerStarted","Data":"1c2568fa91320237b22627c4b5326e1c95faadf41d62d5df9c9f02ba0f587a7f"} Dec 08 19:40:57 crc kubenswrapper[5118]: I1208 19:40:57.373976 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" podStartSLOduration=4.195974639 podStartE2EDuration="7.373956026s" podCreationTimestamp="2025-12-08 19:40:50 +0000 UTC" firstStartedPulling="2025-12-08 19:40:52.288541319 +0000 UTC m=+704.581386776" lastFinishedPulling="2025-12-08 19:40:55.466522706 +0000 UTC m=+707.759368163" observedRunningTime="2025-12-08 19:40:57.371413346 +0000 UTC m=+709.664258813" watchObservedRunningTime="2025-12-08 19:40:57.373956026 +0000 UTC m=+709.666801483" Dec 08 19:40:58 crc kubenswrapper[5118]: I1208 19:40:58.368191 5118 generic.go:358] "Generic (PLEG): container finished" podID="9b13bede-3571-4763-a7ba-55f8be80930d" containerID="1c2568fa91320237b22627c4b5326e1c95faadf41d62d5df9c9f02ba0f587a7f" exitCode=0 Dec 08 19:40:58 crc kubenswrapper[5118]: I1208 19:40:58.368435 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" event={"ID":"9b13bede-3571-4763-a7ba-55f8be80930d","Type":"ContainerDied","Data":"1c2568fa91320237b22627c4b5326e1c95faadf41d62d5df9c9f02ba0f587a7f"} Dec 08 19:40:59 crc kubenswrapper[5118]: I1208 19:40:59.197481 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-gszs6" Dec 08 19:40:59 crc kubenswrapper[5118]: I1208 19:40:59.264315 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-k49rf"] Dec 08 19:40:59 crc kubenswrapper[5118]: I1208 19:40:59.857577 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:40:59 crc kubenswrapper[5118]: I1208 19:40:59.927702 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhdql\" (UniqueName: \"kubernetes.io/projected/9b13bede-3571-4763-a7ba-55f8be80930d-kube-api-access-qhdql\") pod \"9b13bede-3571-4763-a7ba-55f8be80930d\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " Dec 08 19:40:59 crc kubenswrapper[5118]: I1208 19:40:59.927819 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-bundle\") pod \"9b13bede-3571-4763-a7ba-55f8be80930d\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " Dec 08 19:40:59 crc kubenswrapper[5118]: I1208 19:40:59.928003 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-util\") pod \"9b13bede-3571-4763-a7ba-55f8be80930d\" (UID: \"9b13bede-3571-4763-a7ba-55f8be80930d\") " Dec 08 19:40:59 crc kubenswrapper[5118]: I1208 19:40:59.929977 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-bundle" (OuterVolumeSpecName: "bundle") pod "9b13bede-3571-4763-a7ba-55f8be80930d" (UID: "9b13bede-3571-4763-a7ba-55f8be80930d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:59 crc kubenswrapper[5118]: I1208 19:40:59.940854 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-util" (OuterVolumeSpecName: "util") pod "9b13bede-3571-4763-a7ba-55f8be80930d" (UID: "9b13bede-3571-4763-a7ba-55f8be80930d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:59 crc kubenswrapper[5118]: I1208 19:40:59.942024 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b13bede-3571-4763-a7ba-55f8be80930d-kube-api-access-qhdql" (OuterVolumeSpecName: "kube-api-access-qhdql") pod "9b13bede-3571-4763-a7ba-55f8be80930d" (UID: "9b13bede-3571-4763-a7ba-55f8be80930d"). InnerVolumeSpecName "kube-api-access-qhdql". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.030762 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qhdql\" (UniqueName: \"kubernetes.io/projected/9b13bede-3571-4763-a7ba-55f8be80930d-kube-api-access-qhdql\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.031244 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.031260 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9b13bede-3571-4763-a7ba-55f8be80930d-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.052652 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-59f55fccbd-jxl49"] Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.053513 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b13bede-3571-4763-a7ba-55f8be80930d" containerName="extract" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.053538 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b13bede-3571-4763-a7ba-55f8be80930d" containerName="extract" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.053547 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b13bede-3571-4763-a7ba-55f8be80930d" containerName="pull" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.053556 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b13bede-3571-4763-a7ba-55f8be80930d" containerName="pull" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.053575 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b13bede-3571-4763-a7ba-55f8be80930d" containerName="util" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.053583 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b13bede-3571-4763-a7ba-55f8be80930d" containerName="util" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.053740 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="9b13bede-3571-4763-a7ba-55f8be80930d" containerName="extract" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.061008 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.064703 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.066060 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.066791 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.082731 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-59f55fccbd-jxl49"] Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.086429 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-q9ds5\"" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.145283 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1da1ad0b-dc96-43d5-9adf-7bbded6900a2-webhook-cert\") pod \"elastic-operator-59f55fccbd-jxl49\" (UID: \"1da1ad0b-dc96-43d5-9adf-7bbded6900a2\") " pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.145388 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7zr5\" (UniqueName: \"kubernetes.io/projected/1da1ad0b-dc96-43d5-9adf-7bbded6900a2-kube-api-access-c7zr5\") pod \"elastic-operator-59f55fccbd-jxl49\" (UID: \"1da1ad0b-dc96-43d5-9adf-7bbded6900a2\") " pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.145416 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1da1ad0b-dc96-43d5-9adf-7bbded6900a2-apiservice-cert\") pod \"elastic-operator-59f55fccbd-jxl49\" (UID: \"1da1ad0b-dc96-43d5-9adf-7bbded6900a2\") " pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.247358 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c7zr5\" (UniqueName: \"kubernetes.io/projected/1da1ad0b-dc96-43d5-9adf-7bbded6900a2-kube-api-access-c7zr5\") pod \"elastic-operator-59f55fccbd-jxl49\" (UID: \"1da1ad0b-dc96-43d5-9adf-7bbded6900a2\") " pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.247426 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1da1ad0b-dc96-43d5-9adf-7bbded6900a2-apiservice-cert\") pod \"elastic-operator-59f55fccbd-jxl49\" (UID: \"1da1ad0b-dc96-43d5-9adf-7bbded6900a2\") " pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.247497 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1da1ad0b-dc96-43d5-9adf-7bbded6900a2-webhook-cert\") pod \"elastic-operator-59f55fccbd-jxl49\" (UID: \"1da1ad0b-dc96-43d5-9adf-7bbded6900a2\") " pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" 
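The pod_startup_latency_tracker records in this log can be cross-checked by hand: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling), i.e. time spent pulling images is not counted against the startup SLO. A minimal Python sketch of that arithmetic, using the timestamps from the 1f59f640...a7znrq record logged at 19:40:57 above (truncated to microseconds, datetime's resolution):

from datetime import datetime

# Cross-check of the pod_startup_latency_tracker record for
# openshift-marketplace/1f59f640...a7znrq (timestamps taken from the
# log above, truncated to microseconds).
FMT = "%Y-%m-%d %H:%M:%S.%f"
created    = datetime.strptime("2025-12-08 19:40:50.000000", FMT)  # podCreationTimestamp
first_pull = datetime.strptime("2025-12-08 19:40:52.288541", FMT)  # firstStartedPulling
last_pull  = datetime.strptime("2025-12-08 19:40:55.466522", FMT)  # lastFinishedPulling
running    = datetime.strptime("2025-12-08 19:40:57.373956", FMT)  # watchObservedRunningTime

e2e  = (running - created).total_seconds()       # 7.373956  -> podStartE2EDuration
pull = (last_pull - first_pull).total_seconds()  # 3.177981  -> image-pull window
print(f"SLO duration = {e2e - pull:.6f}s")       # 4.195975 ~= podStartSLOduration

The same relation holds for the obo-prometheus-operator-admission-webhook rl29g record at 19:41:16 below (21.650448 - 19.440750 ≈ 2.209698), consistent with the kubelet excluding image-pull time from the startup SLO metric.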
Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.259338 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1da1ad0b-dc96-43d5-9adf-7bbded6900a2-apiservice-cert\") pod \"elastic-operator-59f55fccbd-jxl49\" (UID: \"1da1ad0b-dc96-43d5-9adf-7bbded6900a2\") " pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.261297 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1da1ad0b-dc96-43d5-9adf-7bbded6900a2-webhook-cert\") pod \"elastic-operator-59f55fccbd-jxl49\" (UID: \"1da1ad0b-dc96-43d5-9adf-7bbded6900a2\") " pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.272912 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7zr5\" (UniqueName: \"kubernetes.io/projected/1da1ad0b-dc96-43d5-9adf-7bbded6900a2-kube-api-access-c7zr5\") pod \"elastic-operator-59f55fccbd-jxl49\" (UID: \"1da1ad0b-dc96-43d5-9adf-7bbded6900a2\") " pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.381291 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.434645 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" event={"ID":"9b13bede-3571-4763-a7ba-55f8be80930d","Type":"ContainerDied","Data":"e71ec9a6f727d6a05d64114bf2c8232e997d0cc8aa5787772d34e04a260ab451"} Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.434742 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e71ec9a6f727d6a05d64114bf2c8232e997d0cc8aa5787772d34e04a260ab451" Dec 08 19:41:00 crc kubenswrapper[5118]: I1208 19:41:00.434825 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7znrq" Dec 08 19:41:01 crc kubenswrapper[5118]: I1208 19:41:01.083624 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-59f55fccbd-jxl49"] Dec 08 19:41:01 crc kubenswrapper[5118]: I1208 19:41:01.445465 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" event={"ID":"1da1ad0b-dc96-43d5-9adf-7bbded6900a2","Type":"ContainerStarted","Data":"003a5c1e9a8f3f16b8c4dc6a868821e3fd8cdcfd396913973338b4b30a87dbee"} Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.474223 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf"] Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.480393 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf" Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.482469 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-llczw\"" Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.483951 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.486411 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.492120 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf"] Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.596775 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9541782a-3476-4fbc-9f14-1322868df391-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-24qtf\" (UID: \"9541782a-3476-4fbc-9f14-1322868df391\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf" Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.597150 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47kwb\" (UniqueName: \"kubernetes.io/projected/9541782a-3476-4fbc-9f14-1322868df391-kube-api-access-47kwb\") pod \"cert-manager-operator-controller-manager-64c74584c4-24qtf\" (UID: \"9541782a-3476-4fbc-9f14-1322868df391\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf" Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.698840 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9541782a-3476-4fbc-9f14-1322868df391-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-24qtf\" (UID: \"9541782a-3476-4fbc-9f14-1322868df391\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf" Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.698890 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-47kwb\" (UniqueName: \"kubernetes.io/projected/9541782a-3476-4fbc-9f14-1322868df391-kube-api-access-47kwb\") pod \"cert-manager-operator-controller-manager-64c74584c4-24qtf\" (UID: \"9541782a-3476-4fbc-9f14-1322868df391\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf" Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.699633 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9541782a-3476-4fbc-9f14-1322868df391-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-24qtf\" (UID: \"9541782a-3476-4fbc-9f14-1322868df391\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf" Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.735798 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-47kwb\" (UniqueName: \"kubernetes.io/projected/9541782a-3476-4fbc-9f14-1322868df391-kube-api-access-47kwb\") pod \"cert-manager-operator-controller-manager-64c74584c4-24qtf\" 
(UID: \"9541782a-3476-4fbc-9f14-1322868df391\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf" Dec 08 19:41:12 crc kubenswrapper[5118]: I1208 19:41:12.803868 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf" Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.264282 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf"] Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.573942 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g" event={"ID":"0f50a79c-af0a-4520-9c18-2c686373e86e","Type":"ContainerStarted","Data":"e4c348377d264e0cc588c290a5d9c5ec13274203a31e1a94976db4357d559b3f"} Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.580017 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk" event={"ID":"4bd890c0-c2f1-4cac-aaea-a4c79efedc11","Type":"ContainerStarted","Data":"ce853882498e698ae8dbbb4f0f52cc40fe893977950884987bb56077fb3051b5"} Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.581802 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" event={"ID":"4dafe562-d7da-46d9-bef9-33bd0eb4e4ed","Type":"ContainerStarted","Data":"dbc6c4a8c14eb1753d3f07cb332352a97d9a0a8be16f2115361a43b4079abe4e"} Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.582178 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.587510 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-qw8pf" event={"ID":"49800b56-28af-432c-8b8e-68f8fb223895","Type":"ContainerStarted","Data":"8e3a7a5343d6c4ce0f2376a3da7102ecad64cc0c04c997a0b0b92f50fbb8d8c2"} Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.588987 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf" event={"ID":"9541782a-3476-4fbc-9f14-1322868df391","Type":"ContainerStarted","Data":"e2b0772336fc01bd76c31656d2826951342c42f838bae639a3a48eb2e540fa91"} Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.590037 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-jlh8f" event={"ID":"f099b829-f9f7-4c78-b52c-a079207ebca8","Type":"ContainerStarted","Data":"e83ff724282f034436fff88906611a5877c9f86e9a6b776ec791185ae38c7b06"} Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.590707 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-jlh8f" Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.596066 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" event={"ID":"1da1ad0b-dc96-43d5-9adf-7bbded6900a2","Type":"ContainerStarted","Data":"cfcccd136bbd0516faf1f623056afc76c24cf9d26ae981db35aca32e361bd4e7"} Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.647346 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-78c97476f4-jlh8f" Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.650466 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-rl29g" podStartSLOduration=2.209697619 podStartE2EDuration="21.650447957s" podCreationTimestamp="2025-12-08 19:40:55 +0000 UTC" firstStartedPulling="2025-12-08 19:40:56.5508594 +0000 UTC m=+708.843704857" lastFinishedPulling="2025-12-08 19:41:15.991609748 +0000 UTC m=+728.284455195" observedRunningTime="2025-12-08 19:41:16.641502214 +0000 UTC m=+728.934347681" watchObservedRunningTime="2025-12-08 19:41:16.650447957 +0000 UTC m=+728.943293414" Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.704422 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-59f55fccbd-jxl49" podStartSLOduration=1.9861291049999998 podStartE2EDuration="16.704402976s" podCreationTimestamp="2025-12-08 19:41:00 +0000 UTC" firstStartedPulling="2025-12-08 19:41:01.122601705 +0000 UTC m=+713.415447162" lastFinishedPulling="2025-12-08 19:41:15.840875576 +0000 UTC m=+728.133721033" observedRunningTime="2025-12-08 19:41:16.698453693 +0000 UTC m=+728.991299150" watchObservedRunningTime="2025-12-08 19:41:16.704402976 +0000 UTC m=+728.997248443" Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.737189 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" podStartSLOduration=1.731018623 podStartE2EDuration="20.737168597s" podCreationTimestamp="2025-12-08 19:40:56 +0000 UTC" firstStartedPulling="2025-12-08 19:40:56.985518846 +0000 UTC m=+709.278364303" lastFinishedPulling="2025-12-08 19:41:15.99166882 +0000 UTC m=+728.284514277" observedRunningTime="2025-12-08 19:41:16.732528561 +0000 UTC m=+729.025374028" watchObservedRunningTime="2025-12-08 19:41:16.737168597 +0000 UTC m=+729.030014054" Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.815127 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-jlh8f" podStartSLOduration=2.52409806 podStartE2EDuration="21.815107639s" podCreationTimestamp="2025-12-08 19:40:55 +0000 UTC" firstStartedPulling="2025-12-08 19:40:56.659014387 +0000 UTC m=+708.951859844" lastFinishedPulling="2025-12-08 19:41:15.950023966 +0000 UTC m=+728.242869423" observedRunningTime="2025-12-08 19:41:16.811756757 +0000 UTC m=+729.104602224" watchObservedRunningTime="2025-12-08 19:41:16.815107639 +0000 UTC m=+729.107953096" Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.815620 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75ddbddc5c-s2bkk" podStartSLOduration=2.312109523 podStartE2EDuration="21.815613992s" podCreationTimestamp="2025-12-08 19:40:55 +0000 UTC" firstStartedPulling="2025-12-08 19:40:56.446602699 +0000 UTC m=+708.739448156" lastFinishedPulling="2025-12-08 19:41:15.950107168 +0000 UTC m=+728.242952625" observedRunningTime="2025-12-08 19:41:16.774966676 +0000 UTC m=+729.067812133" watchObservedRunningTime="2025-12-08 19:41:16.815613992 +0000 UTC m=+729.108459449" Dec 08 19:41:16 crc kubenswrapper[5118]: I1208 19:41:16.904570 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-qw8pf" podStartSLOduration=2.37810425 
podStartE2EDuration="21.904552883s" podCreationTimestamp="2025-12-08 19:40:55 +0000 UTC" firstStartedPulling="2025-12-08 19:40:56.424116538 +0000 UTC m=+708.716961995" lastFinishedPulling="2025-12-08 19:41:15.950565161 +0000 UTC m=+728.243410628" observedRunningTime="2025-12-08 19:41:16.847810408 +0000 UTC m=+729.140655865" watchObservedRunningTime="2025-12-08 19:41:16.904552883 +0000 UTC m=+729.197398340" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.755717 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.763810 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.766819 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.766961 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.767215 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.767375 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.767435 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.770039 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.772850 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-ghl85\"" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.774209 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.776968 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.826095 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865574 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865629 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: 
\"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865711 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865758 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865792 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865818 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865844 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865876 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865900 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865933 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: 
\"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865967 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.865988 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.866018 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.866041 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.866069 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/99ba5216-9597-4049-9545-8806d92b2aba-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967430 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967493 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967516 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: 
\"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967543 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967577 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967599 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967623 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967656 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967679 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967719 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967736 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967763 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/99ba5216-9597-4049-9545-8806d92b2aba-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967801 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967822 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967901 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.967948 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.968208 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.968375 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.969138 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.969433 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" 
(UniqueName: \"kubernetes.io/configmap/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.969645 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.970552 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.971036 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/99ba5216-9597-4049-9545-8806d92b2aba-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.975757 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.976580 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.977369 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/99ba5216-9597-4049-9545-8806d92b2aba-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.977398 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.977847 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") 
" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.981572 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:17 crc kubenswrapper[5118]: I1208 19:41:17.994393 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/99ba5216-9597-4049-9545-8806d92b2aba-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"99ba5216-9597-4049-9545-8806d92b2aba\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:18 crc kubenswrapper[5118]: I1208 19:41:18.082271 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:18 crc kubenswrapper[5118]: I1208 19:41:18.586776 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:41:18 crc kubenswrapper[5118]: I1208 19:41:18.615966 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"99ba5216-9597-4049-9545-8806d92b2aba","Type":"ContainerStarted","Data":"b6ed031287122e1aebe9758e573c0541fb025a1066601a618ad116dbc87964b2"} Dec 08 19:41:20 crc kubenswrapper[5118]: I1208 19:41:20.630206 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf" event={"ID":"9541782a-3476-4fbc-9f14-1322868df391","Type":"ContainerStarted","Data":"023ead05e32573964ad71a8ff54724a70c5069993de2fc935d93cf0b165cb552"} Dec 08 19:41:20 crc kubenswrapper[5118]: I1208 19:41:20.651203 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-24qtf" podStartSLOduration=4.489357783 podStartE2EDuration="8.651188863s" podCreationTimestamp="2025-12-08 19:41:12 +0000 UTC" firstStartedPulling="2025-12-08 19:41:16.284882289 +0000 UTC m=+728.577727756" lastFinishedPulling="2025-12-08 19:41:20.446713379 +0000 UTC m=+732.739558836" observedRunningTime="2025-12-08 19:41:20.64735051 +0000 UTC m=+732.940195967" watchObservedRunningTime="2025-12-08 19:41:20.651188863 +0000 UTC m=+732.944034320" Dec 08 19:41:24 crc kubenswrapper[5118]: I1208 19:41:24.329749 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" podUID="5a7dc4f4-9762-4968-b509-c2ee68240e9b" containerName="registry" containerID="cri-o://35e71916328b5ddf865ad73e3cbd75ade7d5eabe95aba64d542e2031fb0e8097" gracePeriod=30 Dec 08 19:41:24 crc kubenswrapper[5118]: I1208 19:41:24.706106 5118 generic.go:358] "Generic (PLEG): container finished" podID="5a7dc4f4-9762-4968-b509-c2ee68240e9b" containerID="35e71916328b5ddf865ad73e3cbd75ade7d5eabe95aba64d542e2031fb0e8097" exitCode=0 Dec 08 19:41:24 crc kubenswrapper[5118]: I1208 19:41:24.706288 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" event={"ID":"5a7dc4f4-9762-4968-b509-c2ee68240e9b","Type":"ContainerDied","Data":"35e71916328b5ddf865ad73e3cbd75ade7d5eabe95aba64d542e2031fb0e8097"} Dec 08 
Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.705736 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.769910 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" event={"ID":"5a7dc4f4-9762-4968-b509-c2ee68240e9b","Type":"ContainerDied","Data":"0a95173a440d45d9fb9572353a443c43833845d61847f9d3764583475aa7a2e0"} Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.769961 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt"] Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.769987 5118 scope.go:117] "RemoveContainer" containerID="35e71916328b5ddf865ad73e3cbd75ade7d5eabe95aba64d542e2031fb0e8097" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.770044 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-k49rf" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.770194 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.772303 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.772613 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.775135 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-7mgdn\"" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.796848 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-tls\") pod \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.796896 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-trusted-ca\") pod \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.796932 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-bound-sa-token\") pod \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.796959 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a7dc4f4-9762-4968-b509-c2ee68240e9b-ca-trust-extracted\") pod \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.796990 5118 reconciler_common.go:162]
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rjh9l\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-kube-api-access-rjh9l\") pod \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.798993 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-certificates\") pod \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.799241 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.799330 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a7dc4f4-9762-4968-b509-c2ee68240e9b-installation-pull-secrets\") pod \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\" (UID: \"5a7dc4f4-9762-4968-b509-c2ee68240e9b\") " Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.799670 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5a7dc4f4-9762-4968-b509-c2ee68240e9b" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.800534 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5a7dc4f4-9762-4968-b509-c2ee68240e9b" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.806353 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7dc4f4-9762-4968-b509-c2ee68240e9b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5a7dc4f4-9762-4968-b509-c2ee68240e9b" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.806580 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5a7dc4f4-9762-4968-b509-c2ee68240e9b" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.806760 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-kube-api-access-rjh9l" (OuterVolumeSpecName: "kube-api-access-rjh9l") pod "5a7dc4f4-9762-4968-b509-c2ee68240e9b" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b"). 
InnerVolumeSpecName "kube-api-access-rjh9l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.812661 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "5a7dc4f4-9762-4968-b509-c2ee68240e9b" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.813366 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7dc4f4-9762-4968-b509-c2ee68240e9b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5a7dc4f4-9762-4968-b509-c2ee68240e9b" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.814111 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5a7dc4f4-9762-4968-b509-c2ee68240e9b" (UID: "5a7dc4f4-9762-4968-b509-c2ee68240e9b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.900484 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9tb9\" (UniqueName: \"kubernetes.io/projected/c4404f9c-cf9d-451f-8488-162a4476025a-kube-api-access-x9tb9\") pod \"cert-manager-cainjector-7dbf76d5c8-9gkvt\" (UID: \"c4404f9c-cf9d-451f-8488-162a4476025a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.900760 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4404f9c-cf9d-451f-8488-162a4476025a-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-9gkvt\" (UID: \"c4404f9c-cf9d-451f-8488-162a4476025a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.900837 5118 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.900847 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.900855 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.900878 5118 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a7dc4f4-9762-4968-b509-c2ee68240e9b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.900886 5118 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-rjh9l\" (UniqueName: \"kubernetes.io/projected/5a7dc4f4-9762-4968-b509-c2ee68240e9b-kube-api-access-rjh9l\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.900895 5118 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a7dc4f4-9762-4968-b509-c2ee68240e9b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:25 crc kubenswrapper[5118]: I1208 19:41:25.900904 5118 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a7dc4f4-9762-4968-b509-c2ee68240e9b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.002279 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x9tb9\" (UniqueName: \"kubernetes.io/projected/c4404f9c-cf9d-451f-8488-162a4476025a-kube-api-access-x9tb9\") pod \"cert-manager-cainjector-7dbf76d5c8-9gkvt\" (UID: \"c4404f9c-cf9d-451f-8488-162a4476025a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt" Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.002351 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4404f9c-cf9d-451f-8488-162a4476025a-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-9gkvt\" (UID: \"c4404f9c-cf9d-451f-8488-162a4476025a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt" Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.023817 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9tb9\" (UniqueName: \"kubernetes.io/projected/c4404f9c-cf9d-451f-8488-162a4476025a-kube-api-access-x9tb9\") pod \"cert-manager-cainjector-7dbf76d5c8-9gkvt\" (UID: \"c4404f9c-cf9d-451f-8488-162a4476025a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt" Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.023956 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4404f9c-cf9d-451f-8488-162a4476025a-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-9gkvt\" (UID: \"c4404f9c-cf9d-451f-8488-162a4476025a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt" Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.087678 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt" Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.108224 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-k49rf"] Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.108262 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-k49rf"] Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.543044 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt"] Dec 08 19:41:26 crc kubenswrapper[5118]: W1208 19:41:26.561383 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4404f9c_cf9d_451f_8488_162a4476025a.slice/crio-28523820771bcfbe55e02251162d9910d083c07af6a5fb9be6ba887863d1469b WatchSource:0}: Error finding container 28523820771bcfbe55e02251162d9910d083c07af6a5fb9be6ba887863d1469b: Status 404 returned error can't find the container with id 28523820771bcfbe55e02251162d9910d083c07af6a5fb9be6ba887863d1469b Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.726904 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt" event={"ID":"c4404f9c-cf9d-451f-8488-162a4476025a","Type":"ContainerStarted","Data":"28523820771bcfbe55e02251162d9910d083c07af6a5fb9be6ba887863d1469b"} Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.915678 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-6js66"] Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.916782 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a7dc4f4-9762-4968-b509-c2ee68240e9b" containerName="registry" Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.916806 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7dc4f4-9762-4968-b509-c2ee68240e9b" containerName="registry" Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.916997 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="5a7dc4f4-9762-4968-b509-c2ee68240e9b" containerName="registry" Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.921189 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.921462 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-6js66"] Dec 08 19:41:26 crc kubenswrapper[5118]: I1208 19:41:26.923549 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-pvpxv\"" Dec 08 19:41:27 crc kubenswrapper[5118]: I1208 19:41:27.020467 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44bcd\" (UniqueName: \"kubernetes.io/projected/f464fe42-ce17-4e9f-a3e9-184b856e92fe-kube-api-access-44bcd\") pod \"cert-manager-webhook-7894b5b9b4-6js66\" (UID: \"f464fe42-ce17-4e9f-a3e9-184b856e92fe\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" Dec 08 19:41:27 crc kubenswrapper[5118]: I1208 19:41:27.020820 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f464fe42-ce17-4e9f-a3e9-184b856e92fe-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-6js66\" (UID: \"f464fe42-ce17-4e9f-a3e9-184b856e92fe\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" Dec 08 19:41:27 crc kubenswrapper[5118]: I1208 19:41:27.122537 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-44bcd\" (UniqueName: \"kubernetes.io/projected/f464fe42-ce17-4e9f-a3e9-184b856e92fe-kube-api-access-44bcd\") pod \"cert-manager-webhook-7894b5b9b4-6js66\" (UID: \"f464fe42-ce17-4e9f-a3e9-184b856e92fe\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" Dec 08 19:41:27 crc kubenswrapper[5118]: I1208 19:41:27.122622 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f464fe42-ce17-4e9f-a3e9-184b856e92fe-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-6js66\" (UID: \"f464fe42-ce17-4e9f-a3e9-184b856e92fe\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" Dec 08 19:41:27 crc kubenswrapper[5118]: I1208 19:41:27.148655 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f464fe42-ce17-4e9f-a3e9-184b856e92fe-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-6js66\" (UID: \"f464fe42-ce17-4e9f-a3e9-184b856e92fe\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" Dec 08 19:41:27 crc kubenswrapper[5118]: I1208 19:41:27.148935 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-44bcd\" (UniqueName: \"kubernetes.io/projected/f464fe42-ce17-4e9f-a3e9-184b856e92fe-kube-api-access-44bcd\") pod \"cert-manager-webhook-7894b5b9b4-6js66\" (UID: \"f464fe42-ce17-4e9f-a3e9-184b856e92fe\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" Dec 08 19:41:27 crc kubenswrapper[5118]: I1208 19:41:27.241623 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" Dec 08 19:41:28 crc kubenswrapper[5118]: I1208 19:41:28.114996 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a7dc4f4-9762-4968-b509-c2ee68240e9b" path="/var/lib/kubelet/pods/5a7dc4f4-9762-4968-b509-c2ee68240e9b/volumes" Dec 08 19:41:28 crc kubenswrapper[5118]: I1208 19:41:28.620449 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-cddjw" Dec 08 19:41:32 crc kubenswrapper[5118]: I1208 19:41:32.659236 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-6js66"] Dec 08 19:41:32 crc kubenswrapper[5118]: I1208 19:41:32.764989 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"99ba5216-9597-4049-9545-8806d92b2aba","Type":"ContainerStarted","Data":"ec025fa3b469d6e947d6495755b013d4756ed5928e15d36d5e95c0d88b346cc0"} Dec 08 19:41:32 crc kubenswrapper[5118]: I1208 19:41:32.766863 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" event={"ID":"f464fe42-ce17-4e9f-a3e9-184b856e92fe","Type":"ContainerStarted","Data":"e5f974f433011e9e255da95d87a7514e6528c4a0d7662326db34d546576ef3c4"} Dec 08 19:41:32 crc kubenswrapper[5118]: I1208 19:41:32.956991 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:41:32 crc kubenswrapper[5118]: I1208 19:41:32.983276 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:41:35 crc kubenswrapper[5118]: I1208 19:41:35.784732 5118 generic.go:358] "Generic (PLEG): container finished" podID="99ba5216-9597-4049-9545-8806d92b2aba" containerID="ec025fa3b469d6e947d6495755b013d4756ed5928e15d36d5e95c0d88b346cc0" exitCode=0 Dec 08 19:41:35 crc kubenswrapper[5118]: I1208 19:41:35.784807 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"99ba5216-9597-4049-9545-8806d92b2aba","Type":"ContainerDied","Data":"ec025fa3b469d6e947d6495755b013d4756ed5928e15d36d5e95c0d88b346cc0"} Dec 08 19:41:39 crc kubenswrapper[5118]: I1208 19:41:39.813292 5118 generic.go:358] "Generic (PLEG): container finished" podID="99ba5216-9597-4049-9545-8806d92b2aba" containerID="1660a4aa582557eae2fb842a50d6a45945a2ce7c075c87446c968a6452beedb7" exitCode=0 Dec 08 19:41:39 crc kubenswrapper[5118]: I1208 19:41:39.813398 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"99ba5216-9597-4049-9545-8806d92b2aba","Type":"ContainerDied","Data":"1660a4aa582557eae2fb842a50d6a45945a2ce7c075c87446c968a6452beedb7"} Dec 08 19:41:42 crc kubenswrapper[5118]: I1208 19:41:42.657083 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-vnp4j"] Dec 08 19:41:42 crc kubenswrapper[5118]: I1208 19:41:42.683872 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-vnp4j"] Dec 08 19:41:42 crc kubenswrapper[5118]: I1208 19:41:42.684010 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-vnp4j" Dec 08 19:41:42 crc kubenswrapper[5118]: I1208 19:41:42.685787 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-s8d6j\"" Dec 08 19:41:42 crc kubenswrapper[5118]: I1208 19:41:42.721400 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f5e1b10-b0db-4eff-b4f4-e866480a60b2-bound-sa-token\") pod \"cert-manager-858d87f86b-vnp4j\" (UID: \"8f5e1b10-b0db-4eff-b4f4-e866480a60b2\") " pod="cert-manager/cert-manager-858d87f86b-vnp4j" Dec 08 19:41:42 crc kubenswrapper[5118]: I1208 19:41:42.721504 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvbdd\" (UniqueName: \"kubernetes.io/projected/8f5e1b10-b0db-4eff-b4f4-e866480a60b2-kube-api-access-gvbdd\") pod \"cert-manager-858d87f86b-vnp4j\" (UID: \"8f5e1b10-b0db-4eff-b4f4-e866480a60b2\") " pod="cert-manager/cert-manager-858d87f86b-vnp4j" Dec 08 19:41:42 crc kubenswrapper[5118]: I1208 19:41:42.823183 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gvbdd\" (UniqueName: \"kubernetes.io/projected/8f5e1b10-b0db-4eff-b4f4-e866480a60b2-kube-api-access-gvbdd\") pod \"cert-manager-858d87f86b-vnp4j\" (UID: \"8f5e1b10-b0db-4eff-b4f4-e866480a60b2\") " pod="cert-manager/cert-manager-858d87f86b-vnp4j" Dec 08 19:41:42 crc kubenswrapper[5118]: I1208 19:41:42.823511 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f5e1b10-b0db-4eff-b4f4-e866480a60b2-bound-sa-token\") pod \"cert-manager-858d87f86b-vnp4j\" (UID: \"8f5e1b10-b0db-4eff-b4f4-e866480a60b2\") " pod="cert-manager/cert-manager-858d87f86b-vnp4j" Dec 08 19:41:42 crc kubenswrapper[5118]: I1208 19:41:42.846273 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f5e1b10-b0db-4eff-b4f4-e866480a60b2-bound-sa-token\") pod \"cert-manager-858d87f86b-vnp4j\" (UID: \"8f5e1b10-b0db-4eff-b4f4-e866480a60b2\") " pod="cert-manager/cert-manager-858d87f86b-vnp4j" Dec 08 19:41:42 crc kubenswrapper[5118]: I1208 19:41:42.846619 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvbdd\" (UniqueName: \"kubernetes.io/projected/8f5e1b10-b0db-4eff-b4f4-e866480a60b2-kube-api-access-gvbdd\") pod \"cert-manager-858d87f86b-vnp4j\" (UID: \"8f5e1b10-b0db-4eff-b4f4-e866480a60b2\") " pod="cert-manager/cert-manager-858d87f86b-vnp4j" Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.002528 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-vnp4j" Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.207943 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-vnp4j"] Dec 08 19:41:43 crc kubenswrapper[5118]: W1208 19:41:43.214055 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f5e1b10_b0db_4eff_b4f4_e866480a60b2.slice/crio-02c61b4cdcaa7fdd3e44e32de25121bfa6785f8493ccb24fbe5aeab7d8da092b WatchSource:0}: Error finding container 02c61b4cdcaa7fdd3e44e32de25121bfa6785f8493ccb24fbe5aeab7d8da092b: Status 404 returned error can't find the container with id 02c61b4cdcaa7fdd3e44e32de25121bfa6785f8493ccb24fbe5aeab7d8da092b Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.848045 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt" event={"ID":"c4404f9c-cf9d-451f-8488-162a4476025a","Type":"ContainerStarted","Data":"60dddd17e3caa504f3427dedf2793349637c9d0a63b30445b1b47d92ca4b9144"} Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.853544 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"99ba5216-9597-4049-9545-8806d92b2aba","Type":"ContainerStarted","Data":"5c0c3f5825c1c013a40adc3140fc3b79ac51d7d0dda60f3444c0a117ae7f5a81"} Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.854091 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.855643 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-vnp4j" event={"ID":"8f5e1b10-b0db-4eff-b4f4-e866480a60b2","Type":"ContainerStarted","Data":"583be19554a437863f205008a398d2070b27e95ecb20c1f4bcd29fbae32a4ec6"} Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.855729 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-vnp4j" event={"ID":"8f5e1b10-b0db-4eff-b4f4-e866480a60b2","Type":"ContainerStarted","Data":"02c61b4cdcaa7fdd3e44e32de25121bfa6785f8493ccb24fbe5aeab7d8da092b"} Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.857521 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" event={"ID":"f464fe42-ce17-4e9f-a3e9-184b856e92fe","Type":"ContainerStarted","Data":"4f8f89da69601429babb95160816014b358b7d9926b967a9b3eb7178a8e344b0"} Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.858717 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.905490 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-9gkvt" podStartSLOduration=2.663852677 podStartE2EDuration="18.90546598s" podCreationTimestamp="2025-12-08 19:41:25 +0000 UTC" firstStartedPulling="2025-12-08 19:41:26.564353795 +0000 UTC m=+738.857199252" lastFinishedPulling="2025-12-08 19:41:42.805967088 +0000 UTC m=+755.098812555" observedRunningTime="2025-12-08 19:41:43.878848656 +0000 UTC m=+756.171694123" watchObservedRunningTime="2025-12-08 19:41:43.90546598 +0000 UTC m=+756.198311437" Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.907648 5118 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" podStartSLOduration=7.746269653 podStartE2EDuration="17.907635618s" podCreationTimestamp="2025-12-08 19:41:26 +0000 UTC" firstStartedPulling="2025-12-08 19:41:32.676511891 +0000 UTC m=+744.969357368" lastFinishedPulling="2025-12-08 19:41:42.837877886 +0000 UTC m=+755.130723333" observedRunningTime="2025-12-08 19:41:43.907144765 +0000 UTC m=+756.199990242" watchObservedRunningTime="2025-12-08 19:41:43.907635618 +0000 UTC m=+756.200481075" Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.933612 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-vnp4j" podStartSLOduration=1.933597535 podStartE2EDuration="1.933597535s" podCreationTimestamp="2025-12-08 19:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:41:43.930595894 +0000 UTC m=+756.223441351" watchObservedRunningTime="2025-12-08 19:41:43.933597535 +0000 UTC m=+756.226442992" Dec 08 19:41:43 crc kubenswrapper[5118]: I1208 19:41:43.986276 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=13.15738513 podStartE2EDuration="26.986258339s" podCreationTimestamp="2025-12-08 19:41:17 +0000 UTC" firstStartedPulling="2025-12-08 19:41:18.59658054 +0000 UTC m=+730.889425997" lastFinishedPulling="2025-12-08 19:41:32.425453749 +0000 UTC m=+744.718299206" observedRunningTime="2025-12-08 19:41:43.98154032 +0000 UTC m=+756.274385807" watchObservedRunningTime="2025-12-08 19:41:43.986258339 +0000 UTC m=+756.279103806" Dec 08 19:41:50 crc kubenswrapper[5118]: I1208 19:41:50.885580 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6js66" Dec 08 19:41:54 crc kubenswrapper[5118]: I1208 19:41:54.972498 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="99ba5216-9597-4049-9545-8806d92b2aba" containerName="elasticsearch" probeResult="failure" output=< Dec 08 19:41:54 crc kubenswrapper[5118]: {"timestamp": "2025-12-08T19:41:54+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 08 19:41:54 crc kubenswrapper[5118]: > Dec 08 19:42:00 crc kubenswrapper[5118]: I1208 19:42:00.058471 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:07 crc kubenswrapper[5118]: I1208 19:42:07.970268 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 08 19:42:07 crc kubenswrapper[5118]: I1208 19:42:07.975516 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:07 crc kubenswrapper[5118]: I1208 19:42:07.977564 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\"" Dec 08 19:42:07 crc kubenswrapper[5118]: I1208 19:42:07.977750 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\"" Dec 08 19:42:07 crc kubenswrapper[5118]: I1208 19:42:07.977861 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\"" Dec 08 19:42:07 crc kubenswrapper[5118]: I1208 19:42:07.978573 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hwdd5\"" Dec 08 19:42:07 crc kubenswrapper[5118]: I1208 19:42:07.978657 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 08 19:42:07 crc kubenswrapper[5118]: I1208 19:42:07.987143 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.069980 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.070232 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.070340 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.070443 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.070568 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.070659 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.070773 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.070879 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndf7h\" (UniqueName: \"kubernetes.io/projected/cb123cf9-0ae2-4efd-841e-1abac745d9a5-kube-api-access-ndf7h\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.070977 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.071083 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.071181 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.071284 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.071410 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.172787 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ndf7h\" (UniqueName: \"kubernetes.io/projected/cb123cf9-0ae2-4efd-841e-1abac745d9a5-kube-api-access-ndf7h\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.172842 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.172868 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.172898 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.172932 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.172982 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.173036 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.173062 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.173086 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.173115 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.173169 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.173194 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.173217 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.173517 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.173861 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.173915 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.173945 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.174067 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.174319 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.176720 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\"" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.176976 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\"" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.177210 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\"" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.177445 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.177828 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hwdd5\"" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.184733 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.185228 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.186562 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.190383 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.190388 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.190917 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.200669 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndf7h\" (UniqueName: \"kubernetes.io/projected/cb123cf9-0ae2-4efd-841e-1abac745d9a5-kube-api-access-ndf7h\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.292871 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:08 crc kubenswrapper[5118]: I1208 19:42:08.511627 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 08 19:42:08 crc kubenswrapper[5118]: W1208 19:42:08.516018 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb123cf9_0ae2_4efd_841e_1abac745d9a5.slice/crio-d348d6ed7ed4cd5b3fad927512aeeac6275cacde416df2cb14a0bcbc2ea75395 WatchSource:0}: Error finding container d348d6ed7ed4cd5b3fad927512aeeac6275cacde416df2cb14a0bcbc2ea75395: Status 404 returned error can't find the container with id d348d6ed7ed4cd5b3fad927512aeeac6275cacde416df2cb14a0bcbc2ea75395 Dec 08 19:42:09 crc kubenswrapper[5118]: I1208 19:42:09.020806 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cb123cf9-0ae2-4efd-841e-1abac745d9a5","Type":"ContainerStarted","Data":"d348d6ed7ed4cd5b3fad927512aeeac6275cacde416df2cb14a0bcbc2ea75395"} Dec 08 19:42:09 crc kubenswrapper[5118]: I1208 19:42:09.467224 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:42:09 crc kubenswrapper[5118]: I1208 19:42:09.467313 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:42:14 crc kubenswrapper[5118]: I1208 19:42:14.058134 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cb123cf9-0ae2-4efd-841e-1abac745d9a5","Type":"ContainerStarted","Data":"0361d2ed09d4942530c3a74ea7484f1e3a384f650a1834fd268c35331515f1cd"} Dec 08 19:42:14 crc kubenswrapper[5118]: I1208 19:42:14.112137 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56676: no serving certificate available for the kubelet" Dec 08 19:42:15 crc kubenswrapper[5118]: I1208 19:42:15.137705 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 08 19:42:16 crc kubenswrapper[5118]: I1208 19:42:16.069545 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-1-build" podUID="cb123cf9-0ae2-4efd-841e-1abac745d9a5" containerName="git-clone" containerID="cri-o://0361d2ed09d4942530c3a74ea7484f1e3a384f650a1834fd268c35331515f1cd" gracePeriod=30 Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.077777 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_cb123cf9-0ae2-4efd-841e-1abac745d9a5/git-clone/0.log" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.077832 5118 generic.go:358] "Generic (PLEG): container finished" podID="cb123cf9-0ae2-4efd-841e-1abac745d9a5" containerID="0361d2ed09d4942530c3a74ea7484f1e3a384f650a1834fd268c35331515f1cd" exitCode=1 Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.077976 5118 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cb123cf9-0ae2-4efd-841e-1abac745d9a5","Type":"ContainerDied","Data":"0361d2ed09d4942530c3a74ea7484f1e3a384f650a1834fd268c35331515f1cd"} Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.805917 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_cb123cf9-0ae2-4efd-841e-1abac745d9a5/git-clone/0.log" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.806205 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906046 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-root\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906112 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-node-pullsecrets\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906152 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildcachedir\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906185 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-proxy-ca-bundles\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906231 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906253 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-system-configs\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906261 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906295 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-blob-cache\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906381 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-pull\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906413 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildworkdir\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906438 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-run\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906442 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906471 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-push\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906537 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-ca-bundles\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906555 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndf7h\" (UniqueName: \"kubernetes.io/projected/cb123cf9-0ae2-4efd-841e-1abac745d9a5-kube-api-access-ndf7h\") pod \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\" (UID: \"cb123cf9-0ae2-4efd-841e-1abac745d9a5\") " Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906674 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906763 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.906799 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.907709 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.907754 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.907988 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.908638 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.909231 5118 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.909253 5118 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.909263 5118 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.909272 5118 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.909282 5118 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.909290 5118 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.909299 5118 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.909308 5118 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.909316 5118 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb123cf9-0ae2-4efd-841e-1abac745d9a5-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.913469 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb123cf9-0ae2-4efd-841e-1abac745d9a5-kube-api-access-ndf7h" (OuterVolumeSpecName: "kube-api-access-ndf7h") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "kube-api-access-ndf7h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.913570 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-pull" (OuterVolumeSpecName: "builder-dockercfg-hwdd5-pull") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "builder-dockercfg-hwdd5-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.913676 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:17 crc kubenswrapper[5118]: I1208 19:42:17.913836 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-push" (OuterVolumeSpecName: "builder-dockercfg-hwdd5-push") pod "cb123cf9-0ae2-4efd-841e-1abac745d9a5" (UID: "cb123cf9-0ae2-4efd-841e-1abac745d9a5"). InnerVolumeSpecName "builder-dockercfg-hwdd5-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:18 crc kubenswrapper[5118]: I1208 19:42:18.010876 5118 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:18 crc kubenswrapper[5118]: I1208 19:42:18.010914 5118 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:18 crc kubenswrapper[5118]: I1208 19:42:18.010924 5118 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/cb123cf9-0ae2-4efd-841e-1abac745d9a5-builder-dockercfg-hwdd5-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:18 crc kubenswrapper[5118]: I1208 19:42:18.010937 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ndf7h\" (UniqueName: \"kubernetes.io/projected/cb123cf9-0ae2-4efd-841e-1abac745d9a5-kube-api-access-ndf7h\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:18 crc kubenswrapper[5118]: I1208 19:42:18.086359 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_cb123cf9-0ae2-4efd-841e-1abac745d9a5/git-clone/0.log" Dec 08 19:42:18 crc kubenswrapper[5118]: I1208 19:42:18.086495 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cb123cf9-0ae2-4efd-841e-1abac745d9a5","Type":"ContainerDied","Data":"d348d6ed7ed4cd5b3fad927512aeeac6275cacde416df2cb14a0bcbc2ea75395"} Dec 08 19:42:18 crc kubenswrapper[5118]: I1208 19:42:18.086532 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 19:42:18 crc kubenswrapper[5118]: I1208 19:42:18.086554 5118 scope.go:117] "RemoveContainer" containerID="0361d2ed09d4942530c3a74ea7484f1e3a384f650a1834fd268c35331515f1cd" Dec 08 19:42:18 crc kubenswrapper[5118]: I1208 19:42:18.128390 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 08 19:42:18 crc kubenswrapper[5118]: I1208 19:42:18.141376 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 08 19:42:20 crc kubenswrapper[5118]: I1208 19:42:20.102706 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb123cf9-0ae2-4efd-841e-1abac745d9a5" path="/var/lib/kubelet/pods/cb123cf9-0ae2-4efd-841e-1abac745d9a5/volumes" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.530961 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.532368 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb123cf9-0ae2-4efd-841e-1abac745d9a5" containerName="git-clone" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.532393 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb123cf9-0ae2-4efd-841e-1abac745d9a5" containerName="git-clone" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.532728 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="cb123cf9-0ae2-4efd-841e-1abac745d9a5" containerName="git-clone" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.557283 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.557466 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.559632 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.560203 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-sys-config\"" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.560489 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-global-ca\"" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.560616 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-ca\"" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.565662 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hwdd5\"" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631082 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631142 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh9pc\" (UniqueName: \"kubernetes.io/projected/1f4512ed-74b0-4025-aa13-2d8c2234ae53-kube-api-access-dh9pc\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631165 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631213 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631243 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631262 5118 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631357 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631467 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631518 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631558 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631668 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631766 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.631792 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 
08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.733354 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.733531 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.733846 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.733964 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.734071 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.734205 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.734327 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.734797 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.734921 5118 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.735044 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.735143 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.735247 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.734356 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.734225 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.734408 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.734261 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.735527 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.735668 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dh9pc\" (UniqueName: \"kubernetes.io/projected/1f4512ed-74b0-4025-aa13-2d8c2234ae53-kube-api-access-dh9pc\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.735813 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.735823 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.735719 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.736276 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.740940 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.740970 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.741557 5118 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.754050 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh9pc\" (UniqueName: \"kubernetes.io/projected/1f4512ed-74b0-4025-aa13-2d8c2234ae53-kube-api-access-dh9pc\") pod \"service-telemetry-framework-index-2-build\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:26 crc kubenswrapper[5118]: I1208 19:42:26.877159 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:27 crc kubenswrapper[5118]: I1208 19:42:27.153624 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 08 19:42:28 crc kubenswrapper[5118]: I1208 19:42:28.157256 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"1f4512ed-74b0-4025-aa13-2d8c2234ae53","Type":"ContainerStarted","Data":"56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9"} Dec 08 19:42:28 crc kubenswrapper[5118]: I1208 19:42:28.157542 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"1f4512ed-74b0-4025-aa13-2d8c2234ae53","Type":"ContainerStarted","Data":"a86d71291d416bc9c7c87c8ca5e7d0e659a97bb811b4d22c18deafec9f07daf2"} Dec 08 19:42:28 crc kubenswrapper[5118]: I1208 19:42:28.213484 5118 ???:1] "http: TLS handshake error from 192.168.126.11:42330: no serving certificate available for the kubelet" Dec 08 19:42:29 crc kubenswrapper[5118]: I1208 19:42:29.244903 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.170489 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-2-build" podUID="1f4512ed-74b0-4025-aa13-2d8c2234ae53" containerName="git-clone" containerID="cri-o://56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9" gracePeriod=30 Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.557866 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_1f4512ed-74b0-4025-aa13-2d8c2234ae53/git-clone/0.log" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.557937 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.691592 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-system-configs\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.691652 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh9pc\" (UniqueName: \"kubernetes.io/projected/1f4512ed-74b0-4025-aa13-2d8c2234ae53-kube-api-access-dh9pc\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.691710 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildworkdir\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.691762 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-root\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.691794 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-ca-bundles\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.691841 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-node-pullsecrets\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.691877 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildcachedir\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.691907 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-push\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.691968 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-blob-cache\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.692031 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: 
\"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-pull\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.692145 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.692204 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-proxy-ca-bundles\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.692286 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-run\") pod \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\" (UID: \"1f4512ed-74b0-4025-aa13-2d8c2234ae53\") " Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.692730 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.692905 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.693036 5118 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.693278 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.693433 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.693932 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.694111 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.694327 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.694347 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.694336 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.699551 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-pull" (OuterVolumeSpecName: "builder-dockercfg-hwdd5-pull") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "builder-dockercfg-hwdd5-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.700235 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-push" (OuterVolumeSpecName: "builder-dockercfg-hwdd5-push") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "builder-dockercfg-hwdd5-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.700586 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.701108 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4512ed-74b0-4025-aa13-2d8c2234ae53-kube-api-access-dh9pc" (OuterVolumeSpecName: "kube-api-access-dh9pc") pod "1f4512ed-74b0-4025-aa13-2d8c2234ae53" (UID: "1f4512ed-74b0-4025-aa13-2d8c2234ae53"). InnerVolumeSpecName "kube-api-access-dh9pc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795111 5118 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795353 5118 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795387 5118 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795411 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dh9pc\" (UniqueName: \"kubernetes.io/projected/1f4512ed-74b0-4025-aa13-2d8c2234ae53-kube-api-access-dh9pc\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795436 5118 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795461 5118 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795487 5118 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795511 5118 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795534 5118 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/1f4512ed-74b0-4025-aa13-2d8c2234ae53-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795557 5118 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795580 5118 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1f4512ed-74b0-4025-aa13-2d8c2234ae53-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:30 crc kubenswrapper[5118]: I1208 19:42:30.795605 5118 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/1f4512ed-74b0-4025-aa13-2d8c2234ae53-builder-dockercfg-hwdd5-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:31 crc kubenswrapper[5118]: I1208 19:42:31.185890 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_1f4512ed-74b0-4025-aa13-2d8c2234ae53/git-clone/0.log" Dec 08 19:42:31 crc kubenswrapper[5118]: I1208 19:42:31.186246 5118 generic.go:358] "Generic (PLEG): container finished" podID="1f4512ed-74b0-4025-aa13-2d8c2234ae53" containerID="56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9" exitCode=1 Dec 08 19:42:31 crc kubenswrapper[5118]: I1208 19:42:31.186398 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 19:42:31 crc kubenswrapper[5118]: I1208 19:42:31.186486 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"1f4512ed-74b0-4025-aa13-2d8c2234ae53","Type":"ContainerDied","Data":"56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9"} Dec 08 19:42:31 crc kubenswrapper[5118]: I1208 19:42:31.186583 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"1f4512ed-74b0-4025-aa13-2d8c2234ae53","Type":"ContainerDied","Data":"a86d71291d416bc9c7c87c8ca5e7d0e659a97bb811b4d22c18deafec9f07daf2"} Dec 08 19:42:31 crc kubenswrapper[5118]: I1208 19:42:31.186637 5118 scope.go:117] "RemoveContainer" containerID="56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9" Dec 08 19:42:31 crc kubenswrapper[5118]: I1208 19:42:31.221187 5118 scope.go:117] "RemoveContainer" containerID="56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9" Dec 08 19:42:31 crc kubenswrapper[5118]: E1208 19:42:31.221896 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9\": container with ID starting with 56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9 not found: ID does not exist" containerID="56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9" Dec 08 19:42:31 crc kubenswrapper[5118]: I1208 19:42:31.221939 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9"} err="failed to get container status \"56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9\": rpc error: code = NotFound desc = could not find 
container \"56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9\": container with ID starting with 56c37ad7caace1a535debc0c8b255b2f6e26572ec795e486579238c1f72702f9 not found: ID does not exist" Dec 08 19:42:31 crc kubenswrapper[5118]: I1208 19:42:31.239967 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 08 19:42:31 crc kubenswrapper[5118]: I1208 19:42:31.243282 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 08 19:42:32 crc kubenswrapper[5118]: I1208 19:42:32.107247 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4512ed-74b0-4025-aa13-2d8c2234ae53" path="/var/lib/kubelet/pods/1f4512ed-74b0-4025-aa13-2d8c2234ae53/volumes" Dec 08 19:42:39 crc kubenswrapper[5118]: I1208 19:42:39.468209 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:42:39 crc kubenswrapper[5118]: I1208 19:42:39.468683 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.644390 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.645077 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f4512ed-74b0-4025-aa13-2d8c2234ae53" containerName="git-clone" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.645090 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4512ed-74b0-4025-aa13-2d8c2234ae53" containerName="git-clone" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.645194 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f4512ed-74b0-4025-aa13-2d8c2234ae53" containerName="git-clone" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.658385 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.660298 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-global-ca\"" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.660759 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.660982 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-sys-config\"" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.661123 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-ca\"" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.661182 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hwdd5\"" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.668409 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736265 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736517 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736546 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736571 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736593 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjh8x\" (UniqueName: \"kubernetes.io/projected/7fb934d2-5ce7-47eb-9eb4-380018624b3f-kube-api-access-sjh8x\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " 
pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736643 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736672 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736769 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736819 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736850 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736928 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.736960 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.737010 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: 
\"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.838629 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.838677 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.838725 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.838758 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.838789 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.838812 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sjh8x\" (UniqueName: \"kubernetes.io/projected/7fb934d2-5ce7-47eb-9eb4-380018624b3f-kube-api-access-sjh8x\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.838862 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.838891 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.838942 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.838976 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.839001 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.839058 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.839097 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.839148 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.839260 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.839317 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-system-configs\") pod 
\"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.839333 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.839532 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.839588 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.839606 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.839909 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.840262 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.845712 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.846164 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " 
pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.846596 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.854457 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjh8x\" (UniqueName: \"kubernetes.io/projected/7fb934d2-5ce7-47eb-9eb4-380018624b3f-kube-api-access-sjh8x\") pod \"service-telemetry-framework-index-3-build\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:40 crc kubenswrapper[5118]: I1208 19:42:40.991088 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:41 crc kubenswrapper[5118]: I1208 19:42:41.254492 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 08 19:42:41 crc kubenswrapper[5118]: I1208 19:42:41.402433 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"7fb934d2-5ce7-47eb-9eb4-380018624b3f","Type":"ContainerStarted","Data":"5249651ee836e64dea6f48a790e04a619279c5eac8e1984bf98e7325d8a357a7"} Dec 08 19:42:42 crc kubenswrapper[5118]: I1208 19:42:42.412381 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"7fb934d2-5ce7-47eb-9eb4-380018624b3f","Type":"ContainerStarted","Data":"76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6"} Dec 08 19:42:42 crc kubenswrapper[5118]: I1208 19:42:42.466952 5118 ???:1] "http: TLS handshake error from 192.168.126.11:37258: no serving certificate available for the kubelet" Dec 08 19:42:43 crc kubenswrapper[5118]: I1208 19:42:43.502803 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.427156 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-3-build" podUID="7fb934d2-5ce7-47eb-9eb4-380018624b3f" containerName="git-clone" containerID="cri-o://76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6" gracePeriod=30 Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.801699 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_7fb934d2-5ce7-47eb-9eb4-380018624b3f/git-clone/0.log" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.802114 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903209 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-system-configs\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903272 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903293 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildcachedir\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903356 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-pull\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903392 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-push\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903507 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903600 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-root\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903705 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-run\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903742 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjh8x\" (UniqueName: \"kubernetes.io/projected/7fb934d2-5ce7-47eb-9eb4-380018624b3f-kube-api-access-sjh8x\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903858 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-proxy-ca-bundles\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903879 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-blob-cache\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903905 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildworkdir\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903925 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-ca-bundles\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.903967 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-node-pullsecrets\") pod \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\" (UID: \"7fb934d2-5ce7-47eb-9eb4-380018624b3f\") " Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.904092 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.904140 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.904493 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.904509 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.904636 5118 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.904664 5118 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.904680 5118 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.904716 5118 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.904733 5118 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.904771 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.904868 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.905024 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.905301 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.908542 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-push" (OuterVolumeSpecName: "builder-dockercfg-hwdd5-push") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "builder-dockercfg-hwdd5-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.908887 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.911403 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb934d2-5ce7-47eb-9eb4-380018624b3f-kube-api-access-sjh8x" (OuterVolumeSpecName: "kube-api-access-sjh8x") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "kube-api-access-sjh8x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:42:44 crc kubenswrapper[5118]: I1208 19:42:44.913849 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-pull" (OuterVolumeSpecName: "builder-dockercfg-hwdd5-pull") pod "7fb934d2-5ce7-47eb-9eb4-380018624b3f" (UID: "7fb934d2-5ce7-47eb-9eb4-380018624b3f"). InnerVolumeSpecName "builder-dockercfg-hwdd5-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.006082 5118 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.006351 5118 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7fb934d2-5ce7-47eb-9eb4-380018624b3f-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.006481 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sjh8x\" (UniqueName: \"kubernetes.io/projected/7fb934d2-5ce7-47eb-9eb4-380018624b3f-kube-api-access-sjh8x\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.006562 5118 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.006652 5118 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fb934d2-5ce7-47eb-9eb4-380018624b3f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.006751 5118 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fb934d2-5ce7-47eb-9eb4-380018624b3f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.006847 5118 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.006930 5118 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/7fb934d2-5ce7-47eb-9eb4-380018624b3f-builder-dockercfg-hwdd5-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.442183 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_7fb934d2-5ce7-47eb-9eb4-380018624b3f/git-clone/0.log" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.442251 5118 generic.go:358] "Generic (PLEG): container finished" podID="7fb934d2-5ce7-47eb-9eb4-380018624b3f" containerID="76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6" exitCode=1 Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.442405 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"7fb934d2-5ce7-47eb-9eb4-380018624b3f","Type":"ContainerDied","Data":"76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6"} Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.442441 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"7fb934d2-5ce7-47eb-9eb4-380018624b3f","Type":"ContainerDied","Data":"5249651ee836e64dea6f48a790e04a619279c5eac8e1984bf98e7325d8a357a7"} 
Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.442464 5118 scope.go:117] "RemoveContainer" containerID="76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.442764 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.476161 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.477591 5118 scope.go:117] "RemoveContainer" containerID="76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6" Dec 08 19:42:45 crc kubenswrapper[5118]: E1208 19:42:45.479120 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6\": container with ID starting with 76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6 not found: ID does not exist" containerID="76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.479165 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6"} err="failed to get container status \"76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6\": rpc error: code = NotFound desc = could not find container \"76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6\": container with ID starting with 76ebffeb3e74b82f7b9b6da7a0fb6f66f6e64e2e4135182e9d49a22b9bd75ef6 not found: ID does not exist" Dec 08 19:42:45 crc kubenswrapper[5118]: I1208 19:42:45.482245 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 08 19:42:46 crc kubenswrapper[5118]: I1208 19:42:46.107899 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fb934d2-5ce7-47eb-9eb4-380018624b3f" path="/var/lib/kubelet/pods/7fb934d2-5ce7-47eb-9eb4-380018624b3f/volumes" Dec 08 19:42:54 crc kubenswrapper[5118]: I1208 19:42:54.960385 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 08 19:42:54 crc kubenswrapper[5118]: I1208 19:42:54.961538 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fb934d2-5ce7-47eb-9eb4-380018624b3f" containerName="git-clone" Dec 08 19:42:54 crc kubenswrapper[5118]: I1208 19:42:54.961561 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb934d2-5ce7-47eb-9eb4-380018624b3f" containerName="git-clone" Dec 08 19:42:54 crc kubenswrapper[5118]: I1208 19:42:54.961678 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="7fb934d2-5ce7-47eb-9eb4-380018624b3f" containerName="git-clone" Dec 08 19:42:54 crc kubenswrapper[5118]: I1208 19:42:54.973423 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:54 crc kubenswrapper[5118]: I1208 19:42:54.975373 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 08 19:42:54 crc kubenswrapper[5118]: I1208 19:42:54.976467 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hwdd5\"" Dec 08 19:42:54 crc kubenswrapper[5118]: I1208 19:42:54.976736 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-ca\"" Dec 08 19:42:54 crc kubenswrapper[5118]: I1208 19:42:54.976924 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 08 19:42:54 crc kubenswrapper[5118]: I1208 19:42:54.977105 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-global-ca\"" Dec 08 19:42:54 crc kubenswrapper[5118]: I1208 19:42:54.977281 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-sys-config\"" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.103906 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.103952 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.103971 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.104117 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.104211 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " 
pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.104249 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.104387 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.104428 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bghm9\" (UniqueName: \"kubernetes.io/projected/23d9d81e-fbe2-4ef1-8052-7efaa819a311-kube-api-access-bghm9\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.104513 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.104728 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.104861 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.104900 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.104926 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.206263 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.206334 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.206524 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.206670 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.206733 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bghm9\" (UniqueName: \"kubernetes.io/projected/23d9d81e-fbe2-4ef1-8052-7efaa819a311-kube-api-access-bghm9\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.206787 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.207207 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.207521 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.207606 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.207762 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.207798 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.208037 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.208086 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.208110 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.208182 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.208216 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.208249 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.208635 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.208894 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.208977 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.208981 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.209576 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.213420 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.213763 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-push\") 
pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.214580 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.229937 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bghm9\" (UniqueName: \"kubernetes.io/projected/23d9d81e-fbe2-4ef1-8052-7efaa819a311-kube-api-access-bghm9\") pod \"service-telemetry-framework-index-4-build\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.292104 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:55 crc kubenswrapper[5118]: I1208 19:42:55.543523 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 08 19:42:56 crc kubenswrapper[5118]: I1208 19:42:56.521673 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"23d9d81e-fbe2-4ef1-8052-7efaa819a311","Type":"ContainerStarted","Data":"8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6"} Dec 08 19:42:56 crc kubenswrapper[5118]: I1208 19:42:56.521770 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"23d9d81e-fbe2-4ef1-8052-7efaa819a311","Type":"ContainerStarted","Data":"6b7b394c668cf54ad2945feb147b46bfe37642c852b45645d083c1822fa726ef"} Dec 08 19:42:56 crc kubenswrapper[5118]: I1208 19:42:56.569536 5118 ???:1] "http: TLS handshake error from 192.168.126.11:36196: no serving certificate available for the kubelet" Dec 08 19:42:57 crc kubenswrapper[5118]: I1208 19:42:57.598104 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.534179 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-4-build" podUID="23d9d81e-fbe2-4ef1-8052-7efaa819a311" containerName="git-clone" containerID="cri-o://8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6" gracePeriod=30 Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.887551 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_23d9d81e-fbe2-4ef1-8052-7efaa819a311/git-clone/0.log" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.887967 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915245 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-root\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915292 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-push\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915332 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915377 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-ca-bundles\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915424 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-blob-cache\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915755 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-run\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915811 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-proxy-ca-bundles\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915810 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915861 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bghm9\" (UniqueName: \"kubernetes.io/projected/23d9d81e-fbe2-4ef1-8052-7efaa819a311-kube-api-access-bghm9\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915901 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildcachedir\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915918 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-node-pullsecrets\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915945 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-system-configs\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915972 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildworkdir\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.915985 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.916007 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-pull\") pod \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\" (UID: \"23d9d81e-fbe2-4ef1-8052-7efaa819a311\") " Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.916023 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.916043 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.916358 5118 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.916370 5118 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23d9d81e-fbe2-4ef1-8052-7efaa819a311-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.916378 5118 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.916387 5118 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.916633 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.916646 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.916680 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.916876 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.917034 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.922848 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-push" (OuterVolumeSpecName: "builder-dockercfg-hwdd5-push") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "builder-dockercfg-hwdd5-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.922848 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d9d81e-fbe2-4ef1-8052-7efaa819a311-kube-api-access-bghm9" (OuterVolumeSpecName: "kube-api-access-bghm9") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "kube-api-access-bghm9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.922905 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-pull" (OuterVolumeSpecName: "builder-dockercfg-hwdd5-pull") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "builder-dockercfg-hwdd5-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:58 crc kubenswrapper[5118]: I1208 19:42:58.922914 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "23d9d81e-fbe2-4ef1-8052-7efaa819a311" (UID: "23d9d81e-fbe2-4ef1-8052-7efaa819a311"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.017039 5118 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.017071 5118 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.017080 5118 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hwdd5-pull\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.017090 5118 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hwdd5-push\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-builder-dockercfg-hwdd5-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.017098 5118 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/23d9d81e-fbe2-4ef1-8052-7efaa819a311-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.017108 5118 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.017116 5118 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/23d9d81e-fbe2-4ef1-8052-7efaa819a311-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.017125 5118 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23d9d81e-fbe2-4ef1-8052-7efaa819a311-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.017133 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bghm9\" (UniqueName: \"kubernetes.io/projected/23d9d81e-fbe2-4ef1-8052-7efaa819a311-kube-api-access-bghm9\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.177108 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-6xfg6"] Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.177869 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23d9d81e-fbe2-4ef1-8052-7efaa819a311" containerName="git-clone" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.177891 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="23d9d81e-fbe2-4ef1-8052-7efaa819a311" containerName="git-clone" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.178054 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="23d9d81e-fbe2-4ef1-8052-7efaa819a311" containerName="git-clone" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.194001 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/infrawatch-operators-6xfg6"] Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.194090 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-6xfg6" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.200204 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-vvj5z\"" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.219088 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr2bp\" (UniqueName: \"kubernetes.io/projected/72e2262b-0259-4a9e-b03c-cc4d3683bc44-kube-api-access-jr2bp\") pod \"infrawatch-operators-6xfg6\" (UID: \"72e2262b-0259-4a9e-b03c-cc4d3683bc44\") " pod="service-telemetry/infrawatch-operators-6xfg6" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.320212 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jr2bp\" (UniqueName: \"kubernetes.io/projected/72e2262b-0259-4a9e-b03c-cc4d3683bc44-kube-api-access-jr2bp\") pod \"infrawatch-operators-6xfg6\" (UID: \"72e2262b-0259-4a9e-b03c-cc4d3683bc44\") " pod="service-telemetry/infrawatch-operators-6xfg6" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.338470 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr2bp\" (UniqueName: \"kubernetes.io/projected/72e2262b-0259-4a9e-b03c-cc4d3683bc44-kube-api-access-jr2bp\") pod \"infrawatch-operators-6xfg6\" (UID: \"72e2262b-0259-4a9e-b03c-cc4d3683bc44\") " pod="service-telemetry/infrawatch-operators-6xfg6" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.510606 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-6xfg6" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.543040 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_23d9d81e-fbe2-4ef1-8052-7efaa819a311/git-clone/0.log" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.543091 5118 generic.go:358] "Generic (PLEG): container finished" podID="23d9d81e-fbe2-4ef1-8052-7efaa819a311" containerID="8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6" exitCode=1 Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.543126 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"23d9d81e-fbe2-4ef1-8052-7efaa819a311","Type":"ContainerDied","Data":"8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6"} Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.543155 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"23d9d81e-fbe2-4ef1-8052-7efaa819a311","Type":"ContainerDied","Data":"6b7b394c668cf54ad2945feb147b46bfe37642c852b45645d083c1822fa726ef"} Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.543172 5118 scope.go:117] "RemoveContainer" containerID="8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.543260 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.571918 5118 scope.go:117] "RemoveContainer" containerID="8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6" Dec 08 19:42:59 crc kubenswrapper[5118]: E1208 19:42:59.572494 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6\": container with ID starting with 8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6 not found: ID does not exist" containerID="8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.572553 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6"} err="failed to get container status \"8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6\": rpc error: code = NotFound desc = could not find container \"8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6\": container with ID starting with 8aefe745f839c0ddfb52838c05bb498eecc29ee3342afe252361967e3dad22f6 not found: ID does not exist" Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.591738 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.597593 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 08 19:42:59 crc kubenswrapper[5118]: I1208 19:42:59.737609 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-6xfg6"] Dec 08 19:42:59 crc kubenswrapper[5118]: E1208 19:42:59.813250 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:42:59 crc kubenswrapper[5118]: E1208 19:42:59.813422 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jr2bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-6xfg6_service-telemetry(72e2262b-0259-4a9e-b03c-cc4d3683bc44): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 19:42:59 crc kubenswrapper[5118]: E1208 19:42:59.814619 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6xfg6" podUID="72e2262b-0259-4a9e-b03c-cc4d3683bc44" Dec 08 19:43:00 crc kubenswrapper[5118]: I1208 19:43:00.103569 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23d9d81e-fbe2-4ef1-8052-7efaa819a311" path="/var/lib/kubelet/pods/23d9d81e-fbe2-4ef1-8052-7efaa819a311/volumes" Dec 08 19:43:00 crc kubenswrapper[5118]: I1208 19:43:00.551396 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/infrawatch-operators-6xfg6" event={"ID":"72e2262b-0259-4a9e-b03c-cc4d3683bc44","Type":"ContainerStarted","Data":"e36ed435f6f6c8d4c9d94ad5b6426600d73be9b08f147b0936e9d7b1ba75cab9"} Dec 08 19:43:00 crc kubenswrapper[5118]: E1208 19:43:00.552537 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6xfg6" podUID="72e2262b-0259-4a9e-b03c-cc4d3683bc44" Dec 08 19:43:01 crc kubenswrapper[5118]: E1208 19:43:01.561199 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6xfg6" podUID="72e2262b-0259-4a9e-b03c-cc4d3683bc44" Dec 08 19:43:04 crc kubenswrapper[5118]: I1208 19:43:04.768660 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-6xfg6"] Dec 08 19:43:04 crc kubenswrapper[5118]: I1208 19:43:04.998976 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-6xfg6" Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.105733 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr2bp\" (UniqueName: \"kubernetes.io/projected/72e2262b-0259-4a9e-b03c-cc4d3683bc44-kube-api-access-jr2bp\") pod \"72e2262b-0259-4a9e-b03c-cc4d3683bc44\" (UID: \"72e2262b-0259-4a9e-b03c-cc4d3683bc44\") " Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.110946 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e2262b-0259-4a9e-b03c-cc4d3683bc44-kube-api-access-jr2bp" (OuterVolumeSpecName: "kube-api-access-jr2bp") pod "72e2262b-0259-4a9e-b03c-cc4d3683bc44" (UID: "72e2262b-0259-4a9e-b03c-cc4d3683bc44"). InnerVolumeSpecName "kube-api-access-jr2bp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.207645 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jr2bp\" (UniqueName: \"kubernetes.io/projected/72e2262b-0259-4a9e-b03c-cc4d3683bc44-kube-api-access-jr2bp\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.589655 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-lmd4w"] Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.590892 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-6xfg6" Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.607400 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-6xfg6" event={"ID":"72e2262b-0259-4a9e-b03c-cc4d3683bc44","Type":"ContainerDied","Data":"e36ed435f6f6c8d4c9d94ad5b6426600d73be9b08f147b0936e9d7b1ba75cab9"} Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.607479 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-lmd4w"] Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.607667 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lmd4w" Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.613116 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-vvj5z\"" Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.653486 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-6xfg6"] Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.658542 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-6xfg6"] Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.714758 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6gkd\" (UniqueName: \"kubernetes.io/projected/5c9df676-377e-4cce-8389-95a81a2b54a0-kube-api-access-z6gkd\") pod \"infrawatch-operators-lmd4w\" (UID: \"5c9df676-377e-4cce-8389-95a81a2b54a0\") " pod="service-telemetry/infrawatch-operators-lmd4w" Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.816865 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z6gkd\" (UniqueName: \"kubernetes.io/projected/5c9df676-377e-4cce-8389-95a81a2b54a0-kube-api-access-z6gkd\") pod \"infrawatch-operators-lmd4w\" (UID: \"5c9df676-377e-4cce-8389-95a81a2b54a0\") " pod="service-telemetry/infrawatch-operators-lmd4w" Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.837618 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6gkd\" (UniqueName: \"kubernetes.io/projected/5c9df676-377e-4cce-8389-95a81a2b54a0-kube-api-access-z6gkd\") pod \"infrawatch-operators-lmd4w\" (UID: \"5c9df676-377e-4cce-8389-95a81a2b54a0\") " pod="service-telemetry/infrawatch-operators-lmd4w" Dec 08 19:43:05 crc kubenswrapper[5118]: I1208 19:43:05.931821 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-lmd4w" Dec 08 19:43:06 crc kubenswrapper[5118]: I1208 19:43:06.105647 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e2262b-0259-4a9e-b03c-cc4d3683bc44" path="/var/lib/kubelet/pods/72e2262b-0259-4a9e-b03c-cc4d3683bc44/volumes" Dec 08 19:43:06 crc kubenswrapper[5118]: I1208 19:43:06.162400 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-lmd4w"] Dec 08 19:43:06 crc kubenswrapper[5118]: E1208 19:43:06.223102 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:43:06 crc kubenswrapper[5118]: E1208 19:43:06.223323 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6gkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
infrawatch-operators-lmd4w_service-telemetry(5c9df676-377e-4cce-8389-95a81a2b54a0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 08 19:43:06 crc kubenswrapper[5118]: E1208 19:43:06.224580 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:43:06 crc kubenswrapper[5118]: I1208 19:43:06.596426 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lmd4w" event={"ID":"5c9df676-377e-4cce-8389-95a81a2b54a0","Type":"ContainerStarted","Data":"160fca259b570058a8515068748c8b9f7a452e8c0552e9d0f3a5122d1593c43a"}
Dec 08 19:43:06 crc kubenswrapper[5118]: E1208 19:43:06.597329 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:43:07 crc kubenswrapper[5118]: E1208 19:43:07.603833 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
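The error chain above tells the whole story of this pod: "manifest unknown" is the registry's answer when the repository exists but holds no manifest for the requested tag, i.e. service-telemetry/service-telemetry-framework-index:latest was never pushed into the internal registry, so the kubelet cycles between ErrImagePull and ImagePullBackOff from here on. The condition can be reproduced outside the kubelet against the Docker Registry HTTP API v2. A minimal Go sketch, not the kubelet's code; the bearer token and the relaxed TLS setup are assumptions for a throwaway check from somewhere that can reach the registry service:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os"
    )

    func main() {
    	// The repository and tag from the pull errors above.
    	url := "https://image-registry.openshift-image-registry.svc:5000" +
    		"/v2/service-telemetry/service-telemetry-framework-index/manifests/latest"
    	req, err := http.NewRequest(http.MethodHead, url, nil)
    	if err != nil {
    		panic(err)
    	}
    	// Ask for a schema-2 manifest, as a container engine would.
    	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v2+json")
    	// Assumption: REGISTRY_TOKEN holds a token with pull access to the namespace.
    	req.Header.Set("Authorization", "Bearer "+os.Getenv("REGISTRY_TOKEN"))
    	client := &http.Client{Transport: &http.Transport{
    		// For a quick check only; trust the serving CA in real use.
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	// 200 means the tag exists; 404 is MANIFEST_UNKNOWN, matching the log.
    	fmt.Println(resp.Status)
    }

A 404 here confirms the fix belongs on the build-and-push side (getting a :latest tag into the service-telemetry repository); nothing in the kubelet's retry loop will recover on its own.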
Dec 08 19:43:09 crc kubenswrapper[5118]: I1208 19:43:09.468021 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:43:09 crc kubenswrapper[5118]: I1208 19:43:09.468536 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:43:09 crc kubenswrapper[5118]: I1208 19:43:09.468637 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-twnt9"
Dec 08 19:43:09 crc kubenswrapper[5118]: I1208 19:43:09.470027 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d431454154fbcd4ebfcd3a345d3b257b49f1ea186ad3587cfb5ff74b16d0d0b8"} pod="openshift-machine-config-operator/machine-config-daemon-twnt9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 19:43:09 crc kubenswrapper[5118]: I1208 19:43:09.470189 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" containerID="cri-o://d431454154fbcd4ebfcd3a345d3b257b49f1ea186ad3587cfb5ff74b16d0d0b8" gracePeriod=600
Dec 08 19:43:09 crc kubenswrapper[5118]: I1208 19:43:09.618733 5118 generic.go:358] "Generic (PLEG): container finished" podID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerID="d431454154fbcd4ebfcd3a345d3b257b49f1ea186ad3587cfb5ff74b16d0d0b8" exitCode=0
Dec 08 19:43:09 crc kubenswrapper[5118]: I1208 19:43:09.618809 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerDied","Data":"d431454154fbcd4ebfcd3a345d3b257b49f1ea186ad3587cfb5ff74b16d0d0b8"}
Dec 08 19:43:09 crc kubenswrapper[5118]: I1208 19:43:09.618855 5118 scope.go:117] "RemoveContainer" containerID="9d9ec033c2d11bd8a4bc45cbc441ba68a7926e1d3c57f8675045fe5aa0fb6da7"
Dec 08 19:43:10 crc kubenswrapper[5118]: I1208 19:43:10.626652 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerStarted","Data":"5ea19603e4d1cffaf24b8b70ad009aa68dd73babbc033205c3239717229c12e2"}
Dec 08 19:43:22 crc kubenswrapper[5118]: E1208 19:43:22.151493 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:43:22 crc kubenswrapper[5118]: E1208 19:43:22.152142 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6gkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-lmd4w_service-telemetry(5c9df676-377e-4cce-8389-95a81a2b54a0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 19:43:22 crc kubenswrapper[5118]: E1208 19:43:22.153352 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull 
image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:43:36 crc kubenswrapper[5118]: E1208 19:43:36.096947 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:43:51 crc kubenswrapper[5118]: E1208 19:43:51.163516 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:43:51 crc kubenswrapper[5118]: E1208 19:43:51.164220 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6gkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-lmd4w_service-telemetry(5c9df676-377e-4cce-8389-95a81a2b54a0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 19:43:51 crc kubenswrapper[5118]: E1208 19:43:51.165401 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:44:06 crc kubenswrapper[5118]: E1208 19:44:06.109146 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image 
source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:44:08 crc kubenswrapper[5118]: I1208 19:44:08.386027 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-j4b8g_1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742/kube-multus/0.log" Dec 08 19:44:08 crc kubenswrapper[5118]: I1208 19:44:08.388074 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-j4b8g_1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742/kube-multus/0.log" Dec 08 19:44:08 crc kubenswrapper[5118]: I1208 19:44:08.392403 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 19:44:08 crc kubenswrapper[5118]: I1208 19:44:08.394047 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 19:44:12 crc kubenswrapper[5118]: I1208 19:44:12.829256 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t6vrz"] Dec 08 19:44:12 crc kubenswrapper[5118]: I1208 19:44:12.938479 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6vrz"] Dec 08 19:44:12 crc kubenswrapper[5118]: I1208 19:44:12.938633 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:13 crc kubenswrapper[5118]: I1208 19:44:13.078000 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-catalog-content\") pod \"certified-operators-t6vrz\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:13 crc kubenswrapper[5118]: I1208 19:44:13.078105 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-utilities\") pod \"certified-operators-t6vrz\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:13 crc kubenswrapper[5118]: I1208 19:44:13.078167 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqxzt\" (UniqueName: \"kubernetes.io/projected/32063997-d530-4340-b509-e997cf4030eb-kube-api-access-kqxzt\") pod \"certified-operators-t6vrz\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:13 crc kubenswrapper[5118]: I1208 19:44:13.179918 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kqxzt\" (UniqueName: \"kubernetes.io/projected/32063997-d530-4340-b509-e997cf4030eb-kube-api-access-kqxzt\") pod \"certified-operators-t6vrz\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:13 crc kubenswrapper[5118]: I1208 19:44:13.180005 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-catalog-content\") pod \"certified-operators-t6vrz\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:13 crc kubenswrapper[5118]: I1208 19:44:13.180080 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-utilities\") pod \"certified-operators-t6vrz\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:13 crc kubenswrapper[5118]: I1208 19:44:13.180543 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-catalog-content\") pod \"certified-operators-t6vrz\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:13 crc kubenswrapper[5118]: I1208 19:44:13.180628 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-utilities\") pod \"certified-operators-t6vrz\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:13 crc kubenswrapper[5118]: I1208 19:44:13.199794 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqxzt\" (UniqueName: \"kubernetes.io/projected/32063997-d530-4340-b509-e997cf4030eb-kube-api-access-kqxzt\") pod \"certified-operators-t6vrz\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:13 crc kubenswrapper[5118]: I1208 19:44:13.256935 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:13 crc kubenswrapper[5118]: I1208 19:44:13.563118 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6vrz"] Dec 08 19:44:14 crc kubenswrapper[5118]: I1208 19:44:14.074983 5118 generic.go:358] "Generic (PLEG): container finished" podID="32063997-d530-4340-b509-e997cf4030eb" containerID="be2aa4569cff77cd28a5a7613dfc30bc52d3c4e1e81715cd2fb97915fb56a33f" exitCode=0 Dec 08 19:44:14 crc kubenswrapper[5118]: I1208 19:44:14.075097 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6vrz" event={"ID":"32063997-d530-4340-b509-e997cf4030eb","Type":"ContainerDied","Data":"be2aa4569cff77cd28a5a7613dfc30bc52d3c4e1e81715cd2fb97915fb56a33f"} Dec 08 19:44:14 crc kubenswrapper[5118]: I1208 19:44:14.075502 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6vrz" event={"ID":"32063997-d530-4340-b509-e997cf4030eb","Type":"ContainerStarted","Data":"1515215195c31d7a52795e70ba1dd402ee30e592187c4417e57b89a8da11909a"} Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.087240 5118 generic.go:358] "Generic (PLEG): container finished" podID="32063997-d530-4340-b509-e997cf4030eb" containerID="7fee0c04d3964ff64f28c0e712016662daf2055e8c3bc89e2caf89de61b2b654" exitCode=0 Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.087314 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6vrz" event={"ID":"32063997-d530-4340-b509-e997cf4030eb","Type":"ContainerDied","Data":"7fee0c04d3964ff64f28c0e712016662daf2055e8c3bc89e2caf89de61b2b654"} Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.617102 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vtqtw"] Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.625297 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.633478 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vtqtw"] Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.640164 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hqs7\" (UniqueName: \"kubernetes.io/projected/3b4811a0-5c70-4041-a991-559c5b4e0f00-kube-api-access-4hqs7\") pod \"community-operators-vtqtw\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.640250 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-utilities\") pod \"community-operators-vtqtw\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.640275 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-catalog-content\") pod \"community-operators-vtqtw\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.741183 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-utilities\") pod \"community-operators-vtqtw\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.741506 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-catalog-content\") pod \"community-operators-vtqtw\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.741653 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4hqs7\" (UniqueName: \"kubernetes.io/projected/3b4811a0-5c70-4041-a991-559c5b4e0f00-kube-api-access-4hqs7\") pod \"community-operators-vtqtw\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.741713 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-utilities\") pod \"community-operators-vtqtw\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.742055 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-catalog-content\") pod \"community-operators-vtqtw\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.766191 5118 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4hqs7\" (UniqueName: \"kubernetes.io/projected/3b4811a0-5c70-4041-a991-559c5b4e0f00-kube-api-access-4hqs7\") pod \"community-operators-vtqtw\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:15 crc kubenswrapper[5118]: I1208 19:44:15.968338 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:16 crc kubenswrapper[5118]: I1208 19:44:16.124203 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6vrz" event={"ID":"32063997-d530-4340-b509-e997cf4030eb","Type":"ContainerStarted","Data":"39ed2d2e08710b76853f354090aee82ddb705dfedddaeb3457f4e48c1a488266"} Dec 08 19:44:16 crc kubenswrapper[5118]: I1208 19:44:16.140665 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t6vrz" podStartSLOduration=3.466284726 podStartE2EDuration="4.140649077s" podCreationTimestamp="2025-12-08 19:44:12 +0000 UTC" firstStartedPulling="2025-12-08 19:44:14.076884768 +0000 UTC m=+906.369730265" lastFinishedPulling="2025-12-08 19:44:14.751249149 +0000 UTC m=+907.044094616" observedRunningTime="2025-12-08 19:44:16.134835519 +0000 UTC m=+908.427680996" watchObservedRunningTime="2025-12-08 19:44:16.140649077 +0000 UTC m=+908.433494534" Dec 08 19:44:16 crc kubenswrapper[5118]: I1208 19:44:16.425522 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vtqtw"] Dec 08 19:44:17 crc kubenswrapper[5118]: I1208 19:44:17.118148 5118 generic.go:358] "Generic (PLEG): container finished" podID="3b4811a0-5c70-4041-a991-559c5b4e0f00" containerID="397d7c040b83b162e8beba7f23b549f9ef9be106de6acb730009cb72f7a7b431" exitCode=0 Dec 08 19:44:17 crc kubenswrapper[5118]: I1208 19:44:17.118889 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtqtw" event={"ID":"3b4811a0-5c70-4041-a991-559c5b4e0f00","Type":"ContainerDied","Data":"397d7c040b83b162e8beba7f23b549f9ef9be106de6acb730009cb72f7a7b431"} Dec 08 19:44:17 crc kubenswrapper[5118]: I1208 19:44:17.118931 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtqtw" event={"ID":"3b4811a0-5c70-4041-a991-559c5b4e0f00","Type":"ContainerStarted","Data":"7d60a3630dadfee3cce2543c74cd15096da818cb8b2f10ec5cb8b6f9ac71a5b1"} Dec 08 19:44:18 crc kubenswrapper[5118]: E1208 19:44:18.102632 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:44:18 crc 
Dec 08 19:44:18 crc kubenswrapper[5118]: I1208 19:44:18.127862 5118 generic.go:358] "Generic (PLEG): container finished" podID="3b4811a0-5c70-4041-a991-559c5b4e0f00" containerID="2bcd0c0bedc05cb37ec3f31e18f0dfd48d3921fc4a220b0ccd8312d2b0043353" exitCode=0
Dec 08 19:44:18 crc kubenswrapper[5118]: I1208 19:44:18.127998 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtqtw" event={"ID":"3b4811a0-5c70-4041-a991-559c5b4e0f00","Type":"ContainerDied","Data":"2bcd0c0bedc05cb37ec3f31e18f0dfd48d3921fc4a220b0ccd8312d2b0043353"}
Dec 08 19:44:19 crc kubenswrapper[5118]: I1208 19:44:19.137231 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtqtw" event={"ID":"3b4811a0-5c70-4041-a991-559c5b4e0f00","Type":"ContainerStarted","Data":"1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6"}
Dec 08 19:44:23 crc kubenswrapper[5118]: I1208 19:44:23.258110 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t6vrz"
Dec 08 19:44:23 crc kubenswrapper[5118]: I1208 19:44:23.258415 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-t6vrz"
Dec 08 19:44:23 crc kubenswrapper[5118]: I1208 19:44:23.310829 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t6vrz"
Dec 08 19:44:23 crc kubenswrapper[5118]: I1208 19:44:23.332171 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vtqtw" podStartSLOduration=7.715453496 podStartE2EDuration="8.332148035s" podCreationTimestamp="2025-12-08 19:44:15 +0000 UTC" firstStartedPulling="2025-12-08 19:44:17.119462655 +0000 UTC m=+909.412308112" lastFinishedPulling="2025-12-08 19:44:17.736157184 +0000 UTC m=+910.029002651" observedRunningTime="2025-12-08 19:44:19.15660849 +0000 UTC m=+911.449453977" watchObservedRunningTime="2025-12-08 19:44:23.332148035 +0000 UTC m=+915.624993532"
Dec 08 19:44:24 crc kubenswrapper[5118]: I1208 19:44:24.212092 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t6vrz"
Dec 08 19:44:24 crc kubenswrapper[5118]: I1208 19:44:24.250924 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6vrz"]
Dec 08 19:44:25 crc kubenswrapper[5118]: I1208 19:44:25.968906 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-vtqtw"
Dec 08 19:44:25 crc kubenswrapper[5118]: I1208 19:44:25.968973 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vtqtw"
Dec 08 19:44:26 crc kubenswrapper[5118]: I1208 19:44:26.023291 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vtqtw"
Dec 08 19:44:26 crc kubenswrapper[5118]: I1208 19:44:26.180045 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t6vrz" podUID="32063997-d530-4340-b509-e997cf4030eb" containerName="registry-server" containerID="cri-o://39ed2d2e08710b76853f354090aee82ddb705dfedddaeb3457f4e48c1a488266" gracePeriod=2
Dec 08 19:44:26 crc kubenswrapper[5118]: I1208 19:44:26.229676 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:26 crc kubenswrapper[5118]: I1208 19:44:26.945006 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vtqtw"] Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.189764 5118 generic.go:358] "Generic (PLEG): container finished" podID="32063997-d530-4340-b509-e997cf4030eb" containerID="39ed2d2e08710b76853f354090aee82ddb705dfedddaeb3457f4e48c1a488266" exitCode=0 Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.189829 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6vrz" event={"ID":"32063997-d530-4340-b509-e997cf4030eb","Type":"ContainerDied","Data":"39ed2d2e08710b76853f354090aee82ddb705dfedddaeb3457f4e48c1a488266"} Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.724350 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.805607 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-utilities\") pod \"32063997-d530-4340-b509-e997cf4030eb\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.805665 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-catalog-content\") pod \"32063997-d530-4340-b509-e997cf4030eb\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.805724 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqxzt\" (UniqueName: \"kubernetes.io/projected/32063997-d530-4340-b509-e997cf4030eb-kube-api-access-kqxzt\") pod \"32063997-d530-4340-b509-e997cf4030eb\" (UID: \"32063997-d530-4340-b509-e997cf4030eb\") " Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.806629 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-utilities" (OuterVolumeSpecName: "utilities") pod "32063997-d530-4340-b509-e997cf4030eb" (UID: "32063997-d530-4340-b509-e997cf4030eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.816436 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32063997-d530-4340-b509-e997cf4030eb-kube-api-access-kqxzt" (OuterVolumeSpecName: "kube-api-access-kqxzt") pod "32063997-d530-4340-b509-e997cf4030eb" (UID: "32063997-d530-4340-b509-e997cf4030eb"). InnerVolumeSpecName "kube-api-access-kqxzt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.841072 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32063997-d530-4340-b509-e997cf4030eb" (UID: "32063997-d530-4340-b509-e997cf4030eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.907337 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.907380 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32063997-d530-4340-b509-e997cf4030eb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:27 crc kubenswrapper[5118]: I1208 19:44:27.907394 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kqxzt\" (UniqueName: \"kubernetes.io/projected/32063997-d530-4340-b509-e997cf4030eb-kube-api-access-kqxzt\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:28 crc kubenswrapper[5118]: I1208 19:44:28.200003 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6vrz" Dec 08 19:44:28 crc kubenswrapper[5118]: I1208 19:44:28.200062 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6vrz" event={"ID":"32063997-d530-4340-b509-e997cf4030eb","Type":"ContainerDied","Data":"1515215195c31d7a52795e70ba1dd402ee30e592187c4417e57b89a8da11909a"} Dec 08 19:44:28 crc kubenswrapper[5118]: I1208 19:44:28.200121 5118 scope.go:117] "RemoveContainer" containerID="39ed2d2e08710b76853f354090aee82ddb705dfedddaeb3457f4e48c1a488266" Dec 08 19:44:28 crc kubenswrapper[5118]: I1208 19:44:28.200728 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vtqtw" podUID="3b4811a0-5c70-4041-a991-559c5b4e0f00" containerName="registry-server" containerID="cri-o://1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6" gracePeriod=2 Dec 08 19:44:28 crc kubenswrapper[5118]: I1208 19:44:28.226335 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6vrz"] Dec 08 19:44:28 crc kubenswrapper[5118]: I1208 19:44:28.230736 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t6vrz"] Dec 08 19:44:28 crc kubenswrapper[5118]: I1208 19:44:28.235723 5118 scope.go:117] "RemoveContainer" containerID="7fee0c04d3964ff64f28c0e712016662daf2055e8c3bc89e2caf89de61b2b654" Dec 08 19:44:28 crc kubenswrapper[5118]: I1208 19:44:28.251024 5118 scope.go:117] "RemoveContainer" containerID="be2aa4569cff77cd28a5a7613dfc30bc52d3c4e1e81715cd2fb97915fb56a33f" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.056390 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.125557 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-utilities\") pod \"3b4811a0-5c70-4041-a991-559c5b4e0f00\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.125649 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-catalog-content\") pod \"3b4811a0-5c70-4041-a991-559c5b4e0f00\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.125762 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hqs7\" (UniqueName: \"kubernetes.io/projected/3b4811a0-5c70-4041-a991-559c5b4e0f00-kube-api-access-4hqs7\") pod \"3b4811a0-5c70-4041-a991-559c5b4e0f00\" (UID: \"3b4811a0-5c70-4041-a991-559c5b4e0f00\") " Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.126876 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-utilities" (OuterVolumeSpecName: "utilities") pod "3b4811a0-5c70-4041-a991-559c5b4e0f00" (UID: "3b4811a0-5c70-4041-a991-559c5b4e0f00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.134464 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4811a0-5c70-4041-a991-559c5b4e0f00-kube-api-access-4hqs7" (OuterVolumeSpecName: "kube-api-access-4hqs7") pod "3b4811a0-5c70-4041-a991-559c5b4e0f00" (UID: "3b4811a0-5c70-4041-a991-559c5b4e0f00"). InnerVolumeSpecName "kube-api-access-4hqs7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.176341 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b4811a0-5c70-4041-a991-559c5b4e0f00" (UID: "3b4811a0-5c70-4041-a991-559c5b4e0f00"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.208022 5118 generic.go:358] "Generic (PLEG): container finished" podID="3b4811a0-5c70-4041-a991-559c5b4e0f00" containerID="1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6" exitCode=0 Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.208152 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtqtw" event={"ID":"3b4811a0-5c70-4041-a991-559c5b4e0f00","Type":"ContainerDied","Data":"1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6"} Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.208178 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtqtw" event={"ID":"3b4811a0-5c70-4041-a991-559c5b4e0f00","Type":"ContainerDied","Data":"7d60a3630dadfee3cce2543c74cd15096da818cb8b2f10ec5cb8b6f9ac71a5b1"} Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.208193 5118 scope.go:117] "RemoveContainer" containerID="1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.208284 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vtqtw" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.227529 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.227566 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4811a0-5c70-4041-a991-559c5b4e0f00-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.227581 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hqs7\" (UniqueName: \"kubernetes.io/projected/3b4811a0-5c70-4041-a991-559c5b4e0f00-kube-api-access-4hqs7\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.230642 5118 scope.go:117] "RemoveContainer" containerID="2bcd0c0bedc05cb37ec3f31e18f0dfd48d3921fc4a220b0ccd8312d2b0043353" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.246405 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vtqtw"] Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.251505 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vtqtw"] Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.269765 5118 scope.go:117] "RemoveContainer" containerID="397d7c040b83b162e8beba7f23b549f9ef9be106de6acb730009cb72f7a7b431" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.288571 5118 scope.go:117] "RemoveContainer" containerID="1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6" Dec 08 19:44:29 crc kubenswrapper[5118]: E1208 19:44:29.288997 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6\": container with ID starting with 1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6 not found: ID does not exist" containerID="1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6" Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.289054 
Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.289054 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6"} err="failed to get container status \"1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6\": rpc error: code = NotFound desc = could not find container \"1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6\": container with ID starting with 1d5c334e0c34056829aad5d9992287a057a5c0c92d462bc2ba9627a6842cf0b6 not found: ID does not exist"
Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.289086 5118 scope.go:117] "RemoveContainer" containerID="2bcd0c0bedc05cb37ec3f31e18f0dfd48d3921fc4a220b0ccd8312d2b0043353"
Dec 08 19:44:29 crc kubenswrapper[5118]: E1208 19:44:29.289360 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bcd0c0bedc05cb37ec3f31e18f0dfd48d3921fc4a220b0ccd8312d2b0043353\": container with ID starting with 2bcd0c0bedc05cb37ec3f31e18f0dfd48d3921fc4a220b0ccd8312d2b0043353 not found: ID does not exist" containerID="2bcd0c0bedc05cb37ec3f31e18f0dfd48d3921fc4a220b0ccd8312d2b0043353"
Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.289387 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bcd0c0bedc05cb37ec3f31e18f0dfd48d3921fc4a220b0ccd8312d2b0043353"} err="failed to get container status \"2bcd0c0bedc05cb37ec3f31e18f0dfd48d3921fc4a220b0ccd8312d2b0043353\": rpc error: code = NotFound desc = could not find container \"2bcd0c0bedc05cb37ec3f31e18f0dfd48d3921fc4a220b0ccd8312d2b0043353\": container with ID starting with 2bcd0c0bedc05cb37ec3f31e18f0dfd48d3921fc4a220b0ccd8312d2b0043353 not found: ID does not exist"
Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.289407 5118 scope.go:117] "RemoveContainer" containerID="397d7c040b83b162e8beba7f23b549f9ef9be106de6acb730009cb72f7a7b431"
Dec 08 19:44:29 crc kubenswrapper[5118]: E1208 19:44:29.289661 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"397d7c040b83b162e8beba7f23b549f9ef9be106de6acb730009cb72f7a7b431\": container with ID starting with 397d7c040b83b162e8beba7f23b549f9ef9be106de6acb730009cb72f7a7b431 not found: ID does not exist" containerID="397d7c040b83b162e8beba7f23b549f9ef9be106de6acb730009cb72f7a7b431"
Dec 08 19:44:29 crc kubenswrapper[5118]: I1208 19:44:29.289729 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397d7c040b83b162e8beba7f23b549f9ef9be106de6acb730009cb72f7a7b431"} err="failed to get container status \"397d7c040b83b162e8beba7f23b549f9ef9be106de6acb730009cb72f7a7b431\": rpc error: code = NotFound desc = could not find container \"397d7c040b83b162e8beba7f23b549f9ef9be106de6acb730009cb72f7a7b431\": container with ID starting with 397d7c040b83b162e8beba7f23b549f9ef9be106de6acb730009cb72f7a7b431 not found: ID does not exist"
Dec 08 19:44:30 crc kubenswrapper[5118]: I1208 19:44:30.105540 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32063997-d530-4340-b509-e997cf4030eb" path="/var/lib/kubelet/pods/32063997-d530-4340-b509-e997cf4030eb/volumes"
Dec 08 19:44:30 crc kubenswrapper[5118]: I1208 19:44:30.106173 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4811a0-5c70-4041-a991-559c5b4e0f00" path="/var/lib/kubelet/pods/3b4811a0-5c70-4041-a991-559c5b4e0f00/volumes"
Dec 08 19:44:33 crc kubenswrapper[5118]: I1208
19:44:33.097177 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:44:33 crc kubenswrapper[5118]: E1208 19:44:33.148173 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:44:33 crc kubenswrapper[5118]: E1208 19:44:33.148361 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6gkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-lmd4w_service-telemetry(5c9df676-377e-4cce-8389-95a81a2b54a0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 19:44:33 crc kubenswrapper[5118]: E1208 19:44:33.149708 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.343898 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-65nsv"] Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344452 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="32063997-d530-4340-b509-e997cf4030eb" containerName="extract-utilities" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344469 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="32063997-d530-4340-b509-e997cf4030eb" containerName="extract-utilities" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344492 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b4811a0-5c70-4041-a991-559c5b4e0f00" containerName="extract-content" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344498 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4811a0-5c70-4041-a991-559c5b4e0f00" containerName="extract-content" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344514 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b4811a0-5c70-4041-a991-559c5b4e0f00" containerName="extract-utilities" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344520 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4811a0-5c70-4041-a991-559c5b4e0f00" containerName="extract-utilities" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344529 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b4811a0-5c70-4041-a991-559c5b4e0f00" containerName="registry-server" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344535 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4811a0-5c70-4041-a991-559c5b4e0f00" containerName="registry-server" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344545 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="32063997-d530-4340-b509-e997cf4030eb" containerName="extract-content" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344550 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="32063997-d530-4340-b509-e997cf4030eb" containerName="extract-content" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344564 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="32063997-d530-4340-b509-e997cf4030eb" containerName="registry-server" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344571 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="32063997-d530-4340-b509-e997cf4030eb" containerName="registry-server" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344656 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="32063997-d530-4340-b509-e997cf4030eb" containerName="registry-server" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.344670 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3b4811a0-5c70-4041-a991-559c5b4e0f00" containerName="registry-server" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.367040 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65nsv"] Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.367182 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65nsv" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.488357 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-catalog-content\") pod \"redhat-operators-65nsv\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " pod="openshift-marketplace/redhat-operators-65nsv" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.488418 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9s2f\" (UniqueName: \"kubernetes.io/projected/7fd575cf-7cea-4577-a8f2-c6b048e4c818-kube-api-access-p9s2f\") pod \"redhat-operators-65nsv\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " pod="openshift-marketplace/redhat-operators-65nsv" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.488516 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-utilities\") pod \"redhat-operators-65nsv\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " pod="openshift-marketplace/redhat-operators-65nsv" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.590279 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-catalog-content\") pod \"redhat-operators-65nsv\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " pod="openshift-marketplace/redhat-operators-65nsv" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.590346 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p9s2f\" (UniqueName: \"kubernetes.io/projected/7fd575cf-7cea-4577-a8f2-c6b048e4c818-kube-api-access-p9s2f\") pod \"redhat-operators-65nsv\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " pod="openshift-marketplace/redhat-operators-65nsv" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.590398 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-utilities\") pod \"redhat-operators-65nsv\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " pod="openshift-marketplace/redhat-operators-65nsv" Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.590900 5118 operation_generator.go:615] "MountVolume.SetUp 
Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.590900 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-catalog-content\") pod \"redhat-operators-65nsv\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " pod="openshift-marketplace/redhat-operators-65nsv"
Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.591016 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-utilities\") pod \"redhat-operators-65nsv\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " pod="openshift-marketplace/redhat-operators-65nsv"
Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.620338 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9s2f\" (UniqueName: \"kubernetes.io/projected/7fd575cf-7cea-4577-a8f2-c6b048e4c818-kube-api-access-p9s2f\") pod \"redhat-operators-65nsv\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " pod="openshift-marketplace/redhat-operators-65nsv"
Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.682333 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65nsv"
Dec 08 19:44:33 crc kubenswrapper[5118]: I1208 19:44:33.908053 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65nsv"]
Dec 08 19:44:34 crc kubenswrapper[5118]: I1208 19:44:34.239901 5118 generic.go:358] "Generic (PLEG): container finished" podID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" containerID="692d00eb59277023420ce513dba5a4d4dfba50c0c5d184f474d1886b7f630692" exitCode=0
Dec 08 19:44:34 crc kubenswrapper[5118]: I1208 19:44:34.239966 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65nsv" event={"ID":"7fd575cf-7cea-4577-a8f2-c6b048e4c818","Type":"ContainerDied","Data":"692d00eb59277023420ce513dba5a4d4dfba50c0c5d184f474d1886b7f630692"}
Dec 08 19:44:34 crc kubenswrapper[5118]: I1208 19:44:34.241324 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65nsv" event={"ID":"7fd575cf-7cea-4577-a8f2-c6b048e4c818","Type":"ContainerStarted","Data":"690f179df5476555df44ec49e2e98eae280af0a1bf06671992c5f11133f18b9a"}
Dec 08 19:44:36 crc kubenswrapper[5118]: I1208 19:44:36.275951 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65nsv" event={"ID":"7fd575cf-7cea-4577-a8f2-c6b048e4c818","Type":"ContainerStarted","Data":"5243b0183cfc8859208b438632acfccd2dcd08b873776a725f9872992775b91f"}
Dec 08 19:44:37 crc kubenswrapper[5118]: I1208 19:44:37.284390 5118 generic.go:358] "Generic (PLEG): container finished" podID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" containerID="5243b0183cfc8859208b438632acfccd2dcd08b873776a725f9872992775b91f" exitCode=0
Dec 08 19:44:37 crc kubenswrapper[5118]: I1208 19:44:37.284569 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65nsv" event={"ID":"7fd575cf-7cea-4577-a8f2-c6b048e4c818","Type":"ContainerDied","Data":"5243b0183cfc8859208b438632acfccd2dcd08b873776a725f9872992775b91f"}
Dec 08 19:44:38 crc kubenswrapper[5118]: I1208 19:44:38.294631 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65nsv" event={"ID":"7fd575cf-7cea-4577-a8f2-c6b048e4c818","Type":"ContainerStarted","Data":"2458242a667f0368743ca44fddb6382e8b20f1662759d9de40bac2357d2c32b1"}
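
NOTE: this is a normal openshift-marketplace catalog pod start: the extract init containers run to completion (the ContainerDied events with exitCode=0 above) and then registry-server starts. As in the &Container spec dumped at 19:44:33 for the infrawatch catalog pod, registry-server containers of this kind serve gRPC on :50051 and are probed by exec'ing grpc_health_probe -addr=:50051; the probe transitions for redhat-operators-65nsv follow below. A rough Python equivalent of such a check, using the standard gRPC health-checking API (requires the grpcio and grpcio-health-checking packages; only the port is taken from the log, the rest is illustrative):

    # Sketch of the health check that grpc_health_probe performs for the exec probes.
    import grpc
    from grpc_health.v1 import health_pb2, health_pb2_grpc

    channel = grpc.insecure_channel("localhost:50051")  # the probe runs inside the pod
    stub = health_pb2_grpc.HealthStub(channel)
    resp = stub.Check(health_pb2.HealthCheckRequest(service=""), timeout=5)
    # SERVING corresponds to a zero exit from grpc_health_probe, i.e. probe success.
    print(health_pb2.HealthCheckResponse.ServingStatus.Name(resp.status))
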
Dec 08 19:44:38 crc kubenswrapper[5118]: I1208 19:44:38.312043 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-65nsv" podStartSLOduration=4.221516587 podStartE2EDuration="5.312016199s" podCreationTimestamp="2025-12-08 19:44:33 +0000 UTC" firstStartedPulling="2025-12-08 19:44:34.240951959 +0000 UTC m=+926.533797416" lastFinishedPulling="2025-12-08 19:44:35.331451571 +0000 UTC m=+927.624297028" observedRunningTime="2025-12-08 19:44:38.309269884 +0000 UTC m=+930.602115351" watchObservedRunningTime="2025-12-08 19:44:38.312016199 +0000 UTC m=+930.604861656"
Dec 08 19:44:43 crc kubenswrapper[5118]: I1208 19:44:43.683124 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-65nsv"
Dec 08 19:44:43 crc kubenswrapper[5118]: I1208 19:44:43.684243 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-65nsv"
Dec 08 19:44:43 crc kubenswrapper[5118]: I1208 19:44:43.719297 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-65nsv"
Dec 08 19:44:44 crc kubenswrapper[5118]: I1208 19:44:44.365776 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-65nsv"
Dec 08 19:44:44 crc kubenswrapper[5118]: I1208 19:44:44.404641 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65nsv"]
Dec 08 19:44:45 crc kubenswrapper[5118]: E1208 19:44:45.097158 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:44:46 crc kubenswrapper[5118]: I1208 19:44:46.340287 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-65nsv" podUID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" containerName="registry-server" containerID="cri-o://2458242a667f0368743ca44fddb6382e8b20f1662759d9de40bac2357d2c32b1" gracePeriod=2
Dec 08 19:44:49 crc kubenswrapper[5118]: I1208 19:44:49.365845 5118 generic.go:358] "Generic (PLEG): container finished" podID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" containerID="2458242a667f0368743ca44fddb6382e8b20f1662759d9de40bac2357d2c32b1" exitCode=0
Dec 08 19:44:49 crc kubenswrapper[5118]: I1208 19:44:49.365929 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65nsv" event={"ID":"7fd575cf-7cea-4577-a8f2-c6b048e4c818","Type":"ContainerDied","Data":"2458242a667f0368743ca44fddb6382e8b20f1662759d9de40bac2357d2c32b1"}
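
NOTE: the "Observed pod startup duration" entry at 19:44:38 above is internally consistent: podStartE2EDuration is observed-running minus creation, and the two reported durations differ by exactly the image pull window, consistent with the upstream pod-startup SLI definition that excludes pull time. Reproducing the kubelet's numbers from the logged timestamps:

    # Seconds within the 19:44 minute, nanosecond fractions kept as logged above.
    created            = 33.000000000  # podCreationTimestamp   19:44:33
    first_started_pull = 34.240951959  # firstStartedPulling
    last_finished_pull = 35.331451571  # lastFinishedPulling
    observed_running   = 38.312016199  # watchObservedRunningTime

    e2e = observed_running - created                       # podStartE2EDuration
    slo = e2e - (last_finished_pull - first_started_pull)  # podStartSLOduration
    print(f"{e2e:.9f} {slo:.9f}")  # 5.312016199 4.221516587
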
pod" pod="openshift-marketplace/redhat-operators-65nsv" event={"ID":"7fd575cf-7cea-4577-a8f2-c6b048e4c818","Type":"ContainerDied","Data":"690f179df5476555df44ec49e2e98eae280af0a1bf06671992c5f11133f18b9a"} Dec 08 19:44:50 crc kubenswrapper[5118]: I1208 19:44:50.377489 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="690f179df5476555df44ec49e2e98eae280af0a1bf06671992c5f11133f18b9a" Dec 08 19:44:50 crc kubenswrapper[5118]: I1208 19:44:50.403792 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65nsv" Dec 08 19:44:50 crc kubenswrapper[5118]: I1208 19:44:50.525302 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9s2f\" (UniqueName: \"kubernetes.io/projected/7fd575cf-7cea-4577-a8f2-c6b048e4c818-kube-api-access-p9s2f\") pod \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " Dec 08 19:44:50 crc kubenswrapper[5118]: I1208 19:44:50.525402 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-utilities\") pod \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " Dec 08 19:44:50 crc kubenswrapper[5118]: I1208 19:44:50.525480 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-catalog-content\") pod \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\" (UID: \"7fd575cf-7cea-4577-a8f2-c6b048e4c818\") " Dec 08 19:44:50 crc kubenswrapper[5118]: I1208 19:44:50.527607 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-utilities" (OuterVolumeSpecName: "utilities") pod "7fd575cf-7cea-4577-a8f2-c6b048e4c818" (UID: "7fd575cf-7cea-4577-a8f2-c6b048e4c818"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:44:50 crc kubenswrapper[5118]: I1208 19:44:50.534670 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd575cf-7cea-4577-a8f2-c6b048e4c818-kube-api-access-p9s2f" (OuterVolumeSpecName: "kube-api-access-p9s2f") pod "7fd575cf-7cea-4577-a8f2-c6b048e4c818" (UID: "7fd575cf-7cea-4577-a8f2-c6b048e4c818"). InnerVolumeSpecName "kube-api-access-p9s2f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:44:50 crc kubenswrapper[5118]: I1208 19:44:50.618023 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fd575cf-7cea-4577-a8f2-c6b048e4c818" (UID: "7fd575cf-7cea-4577-a8f2-c6b048e4c818"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:44:50 crc kubenswrapper[5118]: I1208 19:44:50.627183 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:50 crc kubenswrapper[5118]: I1208 19:44:50.627220 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p9s2f\" (UniqueName: \"kubernetes.io/projected/7fd575cf-7cea-4577-a8f2-c6b048e4c818-kube-api-access-p9s2f\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:50 crc kubenswrapper[5118]: I1208 19:44:50.627235 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd575cf-7cea-4577-a8f2-c6b048e4c818-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:51 crc kubenswrapper[5118]: I1208 19:44:51.383327 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65nsv" Dec 08 19:44:51 crc kubenswrapper[5118]: I1208 19:44:51.414824 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65nsv"] Dec 08 19:44:51 crc kubenswrapper[5118]: I1208 19:44:51.418634 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-65nsv"] Dec 08 19:44:52 crc kubenswrapper[5118]: I1208 19:44:52.102239 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" path="/var/lib/kubelet/pods/7fd575cf-7cea-4577-a8f2-c6b048e4c818/volumes" Dec 08 19:44:56 crc kubenswrapper[5118]: E1208 19:44:56.096552 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.160703 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l"] Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.161769 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" containerName="extract-utilities" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.161782 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" containerName="extract-utilities" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.161793 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" containerName="registry-server" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.161798 5118 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" containerName="registry-server" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.161824 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" containerName="extract-content" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.161830 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" containerName="extract-content" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.161930 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="7fd575cf-7cea-4577-a8f2-c6b048e4c818" containerName="registry-server" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.224198 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l"] Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.224368 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.227047 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.227829 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.366224 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkhz8\" (UniqueName: \"kubernetes.io/projected/37f7c688-16a2-4d53-a050-f29c51c2ee87-kube-api-access-zkhz8\") pod \"collect-profiles-29420385-tqd7l\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.366280 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37f7c688-16a2-4d53-a050-f29c51c2ee87-config-volume\") pod \"collect-profiles-29420385-tqd7l\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.366318 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37f7c688-16a2-4d53-a050-f29c51c2ee87-secret-volume\") pod \"collect-profiles-29420385-tqd7l\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.467720 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkhz8\" (UniqueName: \"kubernetes.io/projected/37f7c688-16a2-4d53-a050-f29c51c2ee87-kube-api-access-zkhz8\") pod \"collect-profiles-29420385-tqd7l\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.467780 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/37f7c688-16a2-4d53-a050-f29c51c2ee87-config-volume\") pod \"collect-profiles-29420385-tqd7l\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.467813 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37f7c688-16a2-4d53-a050-f29c51c2ee87-secret-volume\") pod \"collect-profiles-29420385-tqd7l\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.469928 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37f7c688-16a2-4d53-a050-f29c51c2ee87-config-volume\") pod \"collect-profiles-29420385-tqd7l\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.473412 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37f7c688-16a2-4d53-a050-f29c51c2ee87-secret-volume\") pod \"collect-profiles-29420385-tqd7l\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.485589 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkhz8\" (UniqueName: \"kubernetes.io/projected/37f7c688-16a2-4d53-a050-f29c51c2ee87-kube-api-access-zkhz8\") pod \"collect-profiles-29420385-tqd7l\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.540793 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:00 crc kubenswrapper[5118]: I1208 19:45:00.714426 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l"] Dec 08 19:45:00 crc kubenswrapper[5118]: W1208 19:45:00.718734 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37f7c688_16a2_4d53_a050_f29c51c2ee87.slice/crio-dd862ce8259cd59cc4e9565a81518e8aa8cd93ffd042783bcca80e40d1b65a4a WatchSource:0}: Error finding container dd862ce8259cd59cc4e9565a81518e8aa8cd93ffd042783bcca80e40d1b65a4a: Status 404 returned error can't find the container with id dd862ce8259cd59cc4e9565a81518e8aa8cd93ffd042783bcca80e40d1b65a4a Dec 08 19:45:01 crc kubenswrapper[5118]: I1208 19:45:01.456750 5118 generic.go:358] "Generic (PLEG): container finished" podID="37f7c688-16a2-4d53-a050-f29c51c2ee87" containerID="e3acb0e8ee813cfd452420298f6590eca3fad01b51f5a2e7d669140ba2a97e15" exitCode=0 Dec 08 19:45:01 crc kubenswrapper[5118]: I1208 19:45:01.456877 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" event={"ID":"37f7c688-16a2-4d53-a050-f29c51c2ee87","Type":"ContainerDied","Data":"e3acb0e8ee813cfd452420298f6590eca3fad01b51f5a2e7d669140ba2a97e15"} Dec 08 19:45:01 crc kubenswrapper[5118]: I1208 19:45:01.456905 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" event={"ID":"37f7c688-16a2-4d53-a050-f29c51c2ee87","Type":"ContainerStarted","Data":"dd862ce8259cd59cc4e9565a81518e8aa8cd93ffd042783bcca80e40d1b65a4a"} Dec 08 19:45:02 crc kubenswrapper[5118]: I1208 19:45:02.775605 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:02 crc kubenswrapper[5118]: I1208 19:45:02.799121 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkhz8\" (UniqueName: \"kubernetes.io/projected/37f7c688-16a2-4d53-a050-f29c51c2ee87-kube-api-access-zkhz8\") pod \"37f7c688-16a2-4d53-a050-f29c51c2ee87\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " Dec 08 19:45:02 crc kubenswrapper[5118]: I1208 19:45:02.799158 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37f7c688-16a2-4d53-a050-f29c51c2ee87-config-volume\") pod \"37f7c688-16a2-4d53-a050-f29c51c2ee87\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " Dec 08 19:45:02 crc kubenswrapper[5118]: I1208 19:45:02.799194 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37f7c688-16a2-4d53-a050-f29c51c2ee87-secret-volume\") pod \"37f7c688-16a2-4d53-a050-f29c51c2ee87\" (UID: \"37f7c688-16a2-4d53-a050-f29c51c2ee87\") " Dec 08 19:45:02 crc kubenswrapper[5118]: I1208 19:45:02.800115 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37f7c688-16a2-4d53-a050-f29c51c2ee87-config-volume" (OuterVolumeSpecName: "config-volume") pod "37f7c688-16a2-4d53-a050-f29c51c2ee87" (UID: "37f7c688-16a2-4d53-a050-f29c51c2ee87"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:45:02 crc kubenswrapper[5118]: I1208 19:45:02.805858 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f7c688-16a2-4d53-a050-f29c51c2ee87-kube-api-access-zkhz8" (OuterVolumeSpecName: "kube-api-access-zkhz8") pod "37f7c688-16a2-4d53-a050-f29c51c2ee87" (UID: "37f7c688-16a2-4d53-a050-f29c51c2ee87"). InnerVolumeSpecName "kube-api-access-zkhz8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:45:02 crc kubenswrapper[5118]: I1208 19:45:02.805850 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f7c688-16a2-4d53-a050-f29c51c2ee87-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "37f7c688-16a2-4d53-a050-f29c51c2ee87" (UID: "37f7c688-16a2-4d53-a050-f29c51c2ee87"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:45:02 crc kubenswrapper[5118]: I1208 19:45:02.900525 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zkhz8\" (UniqueName: \"kubernetes.io/projected/37f7c688-16a2-4d53-a050-f29c51c2ee87-kube-api-access-zkhz8\") on node \"crc\" DevicePath \"\"" Dec 08 19:45:02 crc kubenswrapper[5118]: I1208 19:45:02.900567 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37f7c688-16a2-4d53-a050-f29c51c2ee87-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:45:02 crc kubenswrapper[5118]: I1208 19:45:02.900582 5118 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37f7c688-16a2-4d53-a050-f29c51c2ee87-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:45:03 crc kubenswrapper[5118]: I1208 19:45:03.469342 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" event={"ID":"37f7c688-16a2-4d53-a050-f29c51c2ee87","Type":"ContainerDied","Data":"dd862ce8259cd59cc4e9565a81518e8aa8cd93ffd042783bcca80e40d1b65a4a"} Dec 08 19:45:03 crc kubenswrapper[5118]: I1208 19:45:03.469814 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd862ce8259cd59cc4e9565a81518e8aa8cd93ffd042783bcca80e40d1b65a4a" Dec 08 19:45:03 crc kubenswrapper[5118]: I1208 19:45:03.469389 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-tqd7l" Dec 08 19:45:09 crc kubenswrapper[5118]: I1208 19:45:09.467283 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:45:09 crc kubenswrapper[5118]: I1208 19:45:09.467847 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:45:11 crc kubenswrapper[5118]: E1208 19:45:11.097302 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:45:26 crc kubenswrapper[5118]: E1208 19:45:26.098175 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:45:39 crc kubenswrapper[5118]: I1208 19:45:39.467959 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:45:39 crc kubenswrapper[5118]: I1208 19:45:39.469005 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 08 19:45:41 crc kubenswrapper[5118]: E1208 19:45:41.097297 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:45:54 crc kubenswrapper[5118]: E1208 19:45:54.192165 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:45:54 crc kubenswrapper[5118]: E1208 19:45:54.192930 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6gkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
Dec 08 19:45:54 crc kubenswrapper[5118]: E1208 19:45:54.192930 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6gkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-lmd4w_service-telemetry(5c9df676-377e-4cce-8389-95a81a2b54a0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 08 19:45:54 crc kubenswrapper[5118]: E1208 19:45:54.195255 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:45:55 crc kubenswrapper[5118]: E1208 19:45:55.085883 5118 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError"
Dec 08 19:45:57 crc kubenswrapper[5118]: I1208 19:45:57.245780 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Dec 08 19:45:57 crc kubenswrapper[5118]: I1208 19:45:57.260906 5118 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 19:45:57 crc kubenswrapper[5118]: I1208 19:45:57.285330 5118 ???:1] "http: TLS handshake error from 192.168.126.11:54406: no serving certificate available for the kubelet"
Dec 08 19:45:57 crc kubenswrapper[5118]: I1208 19:45:57.322070 5118 ???:1] "http: TLS handshake error from 192.168.126.11:54412: no serving certificate available for the kubelet"
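
NOTE: the kubelet-serving certificate rotation is stuck: the CSR it filed timed out unsigned ("Certificate request was not signed"), so every connection to the kubelet's serving port fails its TLS handshake with "no serving certificate available for the kubelet" until a new CSR is approved. On OpenShift this usually means a kubernetes.io/kubelet-serving CSR is sitting unapproved (`oc get csr`, then `oc adm certificate approve <name>`). A sketch that surfaces pending CSRs for that signer with the standard Python kubernetes client (cluster access and credentials assumed):

    # Sketch: find unapproved kubelet-serving CSRs via the kubernetes Python client.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config()
    certs = client.CertificatesV1Api()
    for csr in certs.list_certificate_signing_request().items:
        if csr.spec.signer_name != "kubernetes.io/kubelet-serving":
            continue
        conditions = (csr.status and csr.status.conditions) or []
        if not any(c.type in ("Approved", "Denied") for c in conditions):
            print("pending:", csr.metadata.name, csr.spec.username)
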
Dec 08 19:45:57 crc kubenswrapper[5118]: I1208 19:45:57.362191 5118 ???:1] "http: TLS handshake error from 192.168.126.11:54422: no serving certificate available for the kubelet"
Dec 08 19:45:57 crc kubenswrapper[5118]: I1208 19:45:57.410521 5118 ???:1] "http: TLS handshake error from 192.168.126.11:54436: no serving certificate available for the kubelet"
Dec 08 19:45:57 crc kubenswrapper[5118]: I1208 19:45:57.481008 5118 ???:1] "http: TLS handshake error from 192.168.126.11:54440: no serving certificate available for the kubelet"
Dec 08 19:45:57 crc kubenswrapper[5118]: I1208 19:45:57.591399 5118 ???:1] "http: TLS handshake error from 192.168.126.11:33088: no serving certificate available for the kubelet"
Dec 08 19:45:57 crc kubenswrapper[5118]: I1208 19:45:57.787047 5118 ???:1] "http: TLS handshake error from 192.168.126.11:33090: no serving certificate available for the kubelet"
Dec 08 19:45:58 crc kubenswrapper[5118]: I1208 19:45:58.135409 5118 ???:1] "http: TLS handshake error from 192.168.126.11:33100: no serving certificate available for the kubelet"
Dec 08 19:45:58 crc kubenswrapper[5118]: I1208 19:45:58.805521 5118 ???:1] "http: TLS handshake error from 192.168.126.11:33102: no serving certificate available for the kubelet"
Dec 08 19:46:00 crc kubenswrapper[5118]: I1208 19:46:00.106226 5118 ???:1] "http: TLS handshake error from 192.168.126.11:33106: no serving certificate available for the kubelet"
Dec 08 19:46:02 crc kubenswrapper[5118]: I1208 19:46:02.690100 5118 ???:1] "http: TLS handshake error from 192.168.126.11:33112: no serving certificate available for the kubelet"
Dec 08 19:46:07 crc kubenswrapper[5118]: I1208 19:46:07.839288 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45006: no serving certificate available for the kubelet"
Dec 08 19:46:08 crc kubenswrapper[5118]: E1208 19:46:08.110230 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:46:09 crc kubenswrapper[5118]: I1208 19:46:09.467826 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:46:09 crc kubenswrapper[5118]: I1208 19:46:09.468218 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:46:09 crc kubenswrapper[5118]: I1208 19:46:09.468285 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-twnt9"
Dec 08 19:46:09 crc kubenswrapper[5118]: I1208 19:46:09.469394 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ea19603e4d1cffaf24b8b70ad009aa68dd73babbc033205c3239717229c12e2"} pod="openshift-machine-config-operator/machine-config-daemon-twnt9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 19:46:09 crc kubenswrapper[5118]: I1208 19:46:09.469514 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" containerID="cri-o://5ea19603e4d1cffaf24b8b70ad009aa68dd73babbc033205c3239717229c12e2" gracePeriod=600
Dec 08 19:46:09 crc kubenswrapper[5118]: I1208 19:46:09.934248 5118 generic.go:358] "Generic (PLEG): container finished" podID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerID="5ea19603e4d1cffaf24b8b70ad009aa68dd73babbc033205c3239717229c12e2" exitCode=0
Dec 08 19:46:09 crc kubenswrapper[5118]: I1208 19:46:09.934320 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerDied","Data":"5ea19603e4d1cffaf24b8b70ad009aa68dd73babbc033205c3239717229c12e2"}
Dec 08 19:46:09 crc kubenswrapper[5118]: I1208 19:46:09.934573 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerStarted","Data":"93dcaffa85c278ede5d1b5bf93e3bb1d6b957021bc8e30f5ff1bdaf695814b61"}
Dec 08 19:46:09 crc kubenswrapper[5118]: I1208 19:46:09.934590 5118 scope.go:117] "RemoveContainer" containerID="d431454154fbcd4ebfcd3a345d3b257b49f1ea186ad3587cfb5ff74b16d0d0b8"
Dec 08 19:46:18 crc kubenswrapper[5118]: I1208 19:46:18.106310 5118 ???:1] "http: TLS handshake error from 192.168.126.11:36542: no serving certificate available for the kubelet"
Dec 08 19:46:19 crc kubenswrapper[5118]: E1208 19:46:19.097440 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:46:38 crc kubenswrapper[5118]: I1208 19:46:38.611446 5118 ???:1] "http: TLS handshake error from 192.168.126.11:59716: no serving certificate available for the kubelet" Dec 08 19:46:45 crc kubenswrapper[5118]: E1208 19:46:45.097014 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:46:58 crc kubenswrapper[5118]: E1208 19:46:58.104408 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:47:12 crc kubenswrapper[5118]: E1208 19:47:12.096590 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
Dec 08 19:47:12 crc kubenswrapper[5118]: E1208 19:47:12.096590 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:47:19 crc kubenswrapper[5118]: I1208 19:47:19.603948 5118 ???:1] "http: TLS handshake error from 192.168.126.11:36928: no serving certificate available for the kubelet"
Dec 08 19:47:25 crc kubenswrapper[5118]: E1208 19:47:25.096909 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:47:36 crc kubenswrapper[5118]: E1208 19:47:36.097113 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:48:01 crc kubenswrapper[5118]: I1208 19:48:01.622187 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-75w9c"] Dec 08 19:48:01 crc kubenswrapper[5118]: I1208 19:48:01.623949 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="37f7c688-16a2-4d53-a050-f29c51c2ee87" containerName="collect-profiles" Dec 08 19:48:01 crc kubenswrapper[5118]: I1208 19:48:01.623992 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f7c688-16a2-4d53-a050-f29c51c2ee87" containerName="collect-profiles" Dec 08 19:48:01 crc kubenswrapper[5118]: I1208 19:48:01.624213 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="37f7c688-16a2-4d53-a050-f29c51c2ee87" containerName="collect-profiles" Dec 08 19:48:01 crc kubenswrapper[5118]: I1208 19:48:01.634284 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-75w9c" Dec 08 19:48:01 crc kubenswrapper[5118]: I1208 19:48:01.638947 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-75w9c"] Dec 08 19:48:01 crc kubenswrapper[5118]: I1208 19:48:01.719670 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgwc4\" (UniqueName: \"kubernetes.io/projected/985fc5a3-6fa6-4691-9d36-e2cb03333fe0-kube-api-access-mgwc4\") pod \"infrawatch-operators-75w9c\" (UID: \"985fc5a3-6fa6-4691-9d36-e2cb03333fe0\") " pod="service-telemetry/infrawatch-operators-75w9c" Dec 08 19:48:01 crc kubenswrapper[5118]: I1208 19:48:01.820341 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mgwc4\" (UniqueName: \"kubernetes.io/projected/985fc5a3-6fa6-4691-9d36-e2cb03333fe0-kube-api-access-mgwc4\") pod \"infrawatch-operators-75w9c\" (UID: \"985fc5a3-6fa6-4691-9d36-e2cb03333fe0\") " pod="service-telemetry/infrawatch-operators-75w9c" Dec 08 19:48:01 crc kubenswrapper[5118]: I1208 19:48:01.838560 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgwc4\" (UniqueName: \"kubernetes.io/projected/985fc5a3-6fa6-4691-9d36-e2cb03333fe0-kube-api-access-mgwc4\") pod \"infrawatch-operators-75w9c\" (UID: \"985fc5a3-6fa6-4691-9d36-e2cb03333fe0\") " pod="service-telemetry/infrawatch-operators-75w9c" Dec 08 19:48:01 crc kubenswrapper[5118]: I1208 19:48:01.990113 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-75w9c" Dec 08 19:48:02 crc kubenswrapper[5118]: I1208 19:48:02.392525 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-75w9c"] Dec 08 19:48:02 crc kubenswrapper[5118]: W1208 19:48:02.405967 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod985fc5a3_6fa6_4691_9d36_e2cb03333fe0.slice/crio-d4f237a0228583d81ddf2a394083e5523cdef8af54a8fa294ac275a16b0704e0 WatchSource:0}: Error finding container d4f237a0228583d81ddf2a394083e5523cdef8af54a8fa294ac275a16b0704e0: Status 404 returned error can't find the container with id d4f237a0228583d81ddf2a394083e5523cdef8af54a8fa294ac275a16b0704e0 Dec 08 19:48:02 crc kubenswrapper[5118]: E1208 19:48:02.473076 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:48:02 crc kubenswrapper[5118]: E1208 19:48:02.473252 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgwc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-75w9c_service-telemetry(985fc5a3-6fa6-4691-9d36-e2cb03333fe0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 19:48:02 crc kubenswrapper[5118]: E1208 19:48:02.474406 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:48:02 crc kubenswrapper[5118]: I1208 19:48:02.689369 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-75w9c" event={"ID":"985fc5a3-6fa6-4691-9d36-e2cb03333fe0","Type":"ContainerStarted","Data":"d4f237a0228583d81ddf2a394083e5523cdef8af54a8fa294ac275a16b0704e0"} Dec 08 19:48:02 crc kubenswrapper[5118]: E1208 19:48:02.691293 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:48:03 crc kubenswrapper[5118]: E1208 19:48:03.695560 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:48:05 crc kubenswrapper[5118]: E1208 19:48:05.096792 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:48:09 crc kubenswrapper[5118]: I1208 19:48:09.468240 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:48:09 crc kubenswrapper[5118]: I1208 19:48:09.468609 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:48:18 crc kubenswrapper[5118]: E1208 19:48:18.105058 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:48:18 crc kubenswrapper[5118]: E1208 19:48:18.161280 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:48:18 crc kubenswrapper[5118]: E1208 19:48:18.161523 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgwc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-75w9c_service-telemetry(985fc5a3-6fa6-4691-9d36-e2cb03333fe0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 19:48:18 crc kubenswrapper[5118]: E1208 19:48:18.162752 5118 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:48:32 crc kubenswrapper[5118]: E1208 19:48:32.097178 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:48:33 crc kubenswrapper[5118]: E1208 19:48:33.097830 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:48:39 crc kubenswrapper[5118]: I1208 19:48:39.468156 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:48:39 crc kubenswrapper[5118]: I1208 19:48:39.469593 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:48:41 crc kubenswrapper[5118]: I1208 19:48:41.553887 5118 ???:1] "http: TLS 
handshake error from 192.168.126.11:46584: no serving certificate available for the kubelet" Dec 08 19:48:45 crc kubenswrapper[5118]: E1208 19:48:45.181905 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:48:45 crc kubenswrapper[5118]: E1208 19:48:45.182428 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6gkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-lmd4w_service-telemetry(5c9df676-377e-4cce-8389-95a81a2b54a0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest 
unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 19:48:45 crc kubenswrapper[5118]: E1208 19:48:45.183654 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:48:46 crc kubenswrapper[5118]: E1208 19:48:46.158897 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:48:46 crc kubenswrapper[5118]: E1208 19:48:46.159077 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgwc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-75w9c_service-telemetry(985fc5a3-6fa6-4691-9d36-e2cb03333fe0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 19:48:46 crc kubenswrapper[5118]: E1208 19:48:46.160239 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:48:58 crc kubenswrapper[5118]: E1208 19:48:58.110212 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:49:02 crc 
kubenswrapper[5118]: E1208 19:49:02.097167 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:49:08 crc kubenswrapper[5118]: I1208 19:49:08.471988 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-j4b8g_1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742/kube-multus/0.log" Dec 08 19:49:08 crc kubenswrapper[5118]: I1208 19:49:08.472515 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-j4b8g_1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742/kube-multus/0.log" Dec 08 19:49:08 crc kubenswrapper[5118]: I1208 19:49:08.481419 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 19:49:08 crc kubenswrapper[5118]: I1208 19:49:08.481660 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 19:49:09 crc kubenswrapper[5118]: I1208 19:49:09.468346 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:49:09 crc kubenswrapper[5118]: I1208 19:49:09.468440 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:49:09 crc kubenswrapper[5118]: I1208 19:49:09.468499 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:49:09 crc kubenswrapper[5118]: I1208 19:49:09.469209 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93dcaffa85c278ede5d1b5bf93e3bb1d6b957021bc8e30f5ff1bdaf695814b61"} pod="openshift-machine-config-operator/machine-config-daemon-twnt9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:49:09 crc kubenswrapper[5118]: I1208 19:49:09.469299 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" 
containerName="machine-config-daemon" containerID="cri-o://93dcaffa85c278ede5d1b5bf93e3bb1d6b957021bc8e30f5ff1bdaf695814b61" gracePeriod=600 Dec 08 19:49:10 crc kubenswrapper[5118]: E1208 19:49:10.096962 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:49:10 crc kubenswrapper[5118]: I1208 19:49:10.173107 5118 generic.go:358] "Generic (PLEG): container finished" podID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerID="93dcaffa85c278ede5d1b5bf93e3bb1d6b957021bc8e30f5ff1bdaf695814b61" exitCode=0 Dec 08 19:49:10 crc kubenswrapper[5118]: I1208 19:49:10.173191 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerDied","Data":"93dcaffa85c278ede5d1b5bf93e3bb1d6b957021bc8e30f5ff1bdaf695814b61"} Dec 08 19:49:10 crc kubenswrapper[5118]: I1208 19:49:10.173268 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerStarted","Data":"11275eaf7b06e592dd99c44ac082b7de4fba7d2eb85504703a7ae6c56cad955b"} Dec 08 19:49:10 crc kubenswrapper[5118]: I1208 19:49:10.173299 5118 scope.go:117] "RemoveContainer" containerID="5ea19603e4d1cffaf24b8b70ad009aa68dd73babbc033205c3239717229c12e2" Dec 08 19:49:17 crc kubenswrapper[5118]: E1208 19:49:17.096899 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:49:25 crc kubenswrapper[5118]: E1208 19:49:25.097530 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:49:32 crc kubenswrapper[5118]: E1208 19:49:32.157264 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:49:32 crc kubenswrapper[5118]: E1208 19:49:32.158069 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgwc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-75w9c_service-telemetry(985fc5a3-6fa6-4691-9d36-e2cb03333fe0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 19:49:32 crc kubenswrapper[5118]: E1208 19:49:32.159224 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:49:36 crc kubenswrapper[5118]: I1208 19:49:36.096106 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:49:36 crc kubenswrapper[5118]: E1208 19:49:36.096867 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:49:44 crc kubenswrapper[5118]: E1208 19:49:44.097743 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:49:49 crc kubenswrapper[5118]: E1208 19:49:49.097915 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:49:56 crc kubenswrapper[5118]: E1208 19:49:56.109217 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:50:04 crc kubenswrapper[5118]: E1208 19:50:04.097065 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:50:11 crc kubenswrapper[5118]: E1208 19:50:11.098206 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:50:18 crc kubenswrapper[5118]: E1208 19:50:18.107659 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:50:22 crc kubenswrapper[5118]: E1208 19:50:22.096846 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:50:22 crc kubenswrapper[5118]: I1208 19:50:22.144087 5118 ???:1] "http: TLS handshake error from 192.168.126.11:52974: no serving certificate available for the kubelet" Dec 08 19:50:32 crc kubenswrapper[5118]: E1208 19:50:32.096387 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest 
latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:50:37 crc kubenswrapper[5118]: E1208 19:50:37.096784 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:50:44 crc kubenswrapper[5118]: E1208 19:50:44.097305 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:50:52 crc kubenswrapper[5118]: E1208 19:50:52.097550 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:50:55 crc kubenswrapper[5118]: E1208 19:50:55.097532 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:51:05 crc kubenswrapper[5118]: E1208 19:51:05.154769 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 19:51:05 crc kubenswrapper[5118]: E1208 19:51:05.155379 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgwc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-75w9c_service-telemetry(985fc5a3-6fa6-4691-9d36-e2cb03333fe0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 19:51:05 crc kubenswrapper[5118]: E1208 19:51:05.156626 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:51:08 crc kubenswrapper[5118]: E1208 19:51:08.104040 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:51:09 crc 
kubenswrapper[5118]: I1208 19:51:09.468078 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:51:09 crc kubenswrapper[5118]: I1208 19:51:09.468422 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:51:16 crc kubenswrapper[5118]: I1208 19:51:16.639360 5118 scope.go:117] "RemoveContainer" containerID="5243b0183cfc8859208b438632acfccd2dcd08b873776a725f9872992775b91f"
Dec 08 19:51:16 crc kubenswrapper[5118]: I1208 19:51:16.668882 5118 scope.go:117] "RemoveContainer" containerID="692d00eb59277023420ce513dba5a4d4dfba50c0c5d184f474d1886b7f630692"
Dec 08 19:51:16 crc kubenswrapper[5118]: I1208 19:51:16.699042 5118 scope.go:117] "RemoveContainer" containerID="2458242a667f0368743ca44fddb6382e8b20f1662759d9de40bac2357d2c32b1"
Dec 08 19:51:17 crc kubenswrapper[5118]: E1208 19:51:17.096780 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:51:22 crc kubenswrapper[5118]: E1208 19:51:22.097310 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:51:25 crc kubenswrapper[5118]: I1208 19:51:25.431986 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38750: no serving certificate available for the kubelet"
Dec 08 19:51:32 crc kubenswrapper[5118]: E1208 19:51:32.098790 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:51:37 crc kubenswrapper[5118]: E1208 19:51:37.096953 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:51:39 crc kubenswrapper[5118]: I1208 19:51:39.467490 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:51:39 crc kubenswrapper[5118]: I1208 19:51:39.468180 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:51:46 crc kubenswrapper[5118]: E1208 19:51:46.096915 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:51:51 crc kubenswrapper[5118]: E1208 19:51:51.096629 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:52:00 crc kubenswrapper[5118]: E1208 19:52:00.097000 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:52:04 crc kubenswrapper[5118]: E1208 19:52:04.096807 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:52:09 crc kubenswrapper[5118]: I1208 19:52:09.467314 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:52:09 crc kubenswrapper[5118]: I1208 19:52:09.467392 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" 
podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:52:09 crc kubenswrapper[5118]: I1208 19:52:09.467440 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" Dec 08 19:52:09 crc kubenswrapper[5118]: I1208 19:52:09.468059 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11275eaf7b06e592dd99c44ac082b7de4fba7d2eb85504703a7ae6c56cad955b"} pod="openshift-machine-config-operator/machine-config-daemon-twnt9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:52:09 crc kubenswrapper[5118]: I1208 19:52:09.468126 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" containerID="cri-o://11275eaf7b06e592dd99c44ac082b7de4fba7d2eb85504703a7ae6c56cad955b" gracePeriod=600 Dec 08 19:52:10 crc kubenswrapper[5118]: I1208 19:52:10.455656 5118 generic.go:358] "Generic (PLEG): container finished" podID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerID="11275eaf7b06e592dd99c44ac082b7de4fba7d2eb85504703a7ae6c56cad955b" exitCode=0 Dec 08 19:52:10 crc kubenswrapper[5118]: I1208 19:52:10.455807 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerDied","Data":"11275eaf7b06e592dd99c44ac082b7de4fba7d2eb85504703a7ae6c56cad955b"} Dec 08 19:52:10 crc kubenswrapper[5118]: I1208 19:52:10.456754 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerStarted","Data":"a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"} Dec 08 19:52:10 crc kubenswrapper[5118]: I1208 19:52:10.456792 5118 scope.go:117] "RemoveContainer" containerID="93dcaffa85c278ede5d1b5bf93e3bb1d6b957021bc8e30f5ff1bdaf695814b61" Dec 08 19:52:15 crc kubenswrapper[5118]: E1208 19:52:15.097669 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:52:15 crc kubenswrapper[5118]: E1208 19:52:15.097761 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off 
Dec 08 19:52:26 crc kubenswrapper[5118]: E1208 19:52:26.097325 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:52:30 crc kubenswrapper[5118]: E1208 19:52:30.097970 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:52:39 crc kubenswrapper[5118]: E1208 19:52:39.097001 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:52:44 crc kubenswrapper[5118]: E1208 19:52:44.097514 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:52:51 crc kubenswrapper[5118]: E1208 19:52:51.098174 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:52:59 crc kubenswrapper[5118]: E1208 19:52:59.097939 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:53:04 crc kubenswrapper[5118]: E1208 19:53:04.097182 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:53:11 crc kubenswrapper[5118]: E1208 19:53:11.097817 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:53:17 crc kubenswrapper[5118]: E1208 19:53:17.096869 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:53:23 crc kubenswrapper[5118]: E1208 19:53:23.098354 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:53:32 crc kubenswrapper[5118]: E1208 19:53:32.096608 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:53:35 crc kubenswrapper[5118]: E1208 19:53:35.096454 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:53:46 crc kubenswrapper[5118]: E1208 19:53:46.152343 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 08 19:53:46 crc kubenswrapper[5118]: E1208 19:53:46.153250 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6gkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-lmd4w_service-telemetry(5c9df676-377e-4cce-8389-95a81a2b54a0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 08 19:53:46 crc kubenswrapper[5118]: E1208 19:53:46.154540 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:53:48 crc kubenswrapper[5118]: E1208 19:53:48.171145 5118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 08 19:53:48 crc kubenswrapper[5118]: E1208 19:53:48.171758 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgwc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-75w9c_service-telemetry(985fc5a3-6fa6-4691-9d36-e2cb03333fe0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 08 19:53:48 crc kubenswrapper[5118]: E1208 19:53:48.173458 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:53:59 crc kubenswrapper[5118]: E1208 19:53:59.098266 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:54:02 crc kubenswrapper[5118]: E1208 19:54:02.097755 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:54:08 crc kubenswrapper[5118]: I1208 19:54:08.545444 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-j4b8g_1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742/kube-multus/0.log"
Dec 08 19:54:08 crc kubenswrapper[5118]: I1208 19:54:08.546965 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-j4b8g_1e8e2a90-2e42-4cbc-b4e2-f011f5dd7742/kube-multus/0.log"
Dec 08 19:54:08 crc kubenswrapper[5118]: I1208 19:54:08.552571 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 08 19:54:08 crc kubenswrapper[5118]: I1208 19:54:08.552740 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.024861 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7rt6r/must-gather-bfxbs"]
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.046382 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7rt6r/must-gather-bfxbs"]
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.046538 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7rt6r/must-gather-bfxbs"
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.048306 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-7rt6r\"/\"default-dockercfg-r8vbk\""
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.052667 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-7rt6r\"/\"kube-root-ca.crt\""
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.057451 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-7rt6r\"/\"openshift-service-ca.crt\""
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.173042 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ttkz\" (UniqueName: \"kubernetes.io/projected/cf1db756-38be-4819-ad4f-c6e46901d905-kube-api-access-2ttkz\") pod \"must-gather-bfxbs\" (UID: \"cf1db756-38be-4819-ad4f-c6e46901d905\") " pod="openshift-must-gather-7rt6r/must-gather-bfxbs"
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.173131 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf1db756-38be-4819-ad4f-c6e46901d905-must-gather-output\") pod \"must-gather-bfxbs\" (UID: \"cf1db756-38be-4819-ad4f-c6e46901d905\") " pod="openshift-must-gather-7rt6r/must-gather-bfxbs"
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.274211 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf1db756-38be-4819-ad4f-c6e46901d905-must-gather-output\") pod \"must-gather-bfxbs\" (UID: \"cf1db756-38be-4819-ad4f-c6e46901d905\") " pod="openshift-must-gather-7rt6r/must-gather-bfxbs"
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.274349 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ttkz\" (UniqueName: \"kubernetes.io/projected/cf1db756-38be-4819-ad4f-c6e46901d905-kube-api-access-2ttkz\") pod \"must-gather-bfxbs\" (UID: \"cf1db756-38be-4819-ad4f-c6e46901d905\") " pod="openshift-must-gather-7rt6r/must-gather-bfxbs"
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.274714 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf1db756-38be-4819-ad4f-c6e46901d905-must-gather-output\") pod \"must-gather-bfxbs\" (UID: \"cf1db756-38be-4819-ad4f-c6e46901d905\") " pod="openshift-must-gather-7rt6r/must-gather-bfxbs"
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.301441 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ttkz\" (UniqueName: \"kubernetes.io/projected/cf1db756-38be-4819-ad4f-c6e46901d905-kube-api-access-2ttkz\") pod \"must-gather-bfxbs\" (UID: \"cf1db756-38be-4819-ad4f-c6e46901d905\") " pod="openshift-must-gather-7rt6r/must-gather-bfxbs"
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.362907 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7rt6r/must-gather-bfxbs"
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.468365 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.468675 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:54:09 crc kubenswrapper[5118]: I1208 19:54:09.770004 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7rt6r/must-gather-bfxbs"]
Dec 08 19:54:09 crc kubenswrapper[5118]: W1208 19:54:09.773837 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf1db756_38be_4819_ad4f_c6e46901d905.slice/crio-1380502c9b03f48901e0bf149933bfdf2b14f4c49bd4b8bb7999a8038023630e WatchSource:0}: Error finding container 1380502c9b03f48901e0bf149933bfdf2b14f4c49bd4b8bb7999a8038023630e: Status 404 returned error can't find the container with id 1380502c9b03f48901e0bf149933bfdf2b14f4c49bd4b8bb7999a8038023630e
Dec 08 19:54:10 crc kubenswrapper[5118]: E1208 19:54:10.096462 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:54:10 crc kubenswrapper[5118]: I1208 19:54:10.317148 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" event={"ID":"cf1db756-38be-4819-ad4f-c6e46901d905","Type":"ContainerStarted","Data":"1380502c9b03f48901e0bf149933bfdf2b14f4c49bd4b8bb7999a8038023630e"}
Dec 08 19:54:16 crc kubenswrapper[5118]: E1208 19:54:16.096339 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:54:16 crc kubenswrapper[5118]: I1208 19:54:16.352648 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" event={"ID":"cf1db756-38be-4819-ad4f-c6e46901d905","Type":"ContainerStarted","Data":"3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e"}
Dec 08 19:54:16 crc kubenswrapper[5118]: I1208 19:54:16.352934 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" event={"ID":"cf1db756-38be-4819-ad4f-c6e46901d905","Type":"ContainerStarted","Data":"e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b"}
Dec 08 19:54:16 crc kubenswrapper[5118]: I1208 19:54:16.373671 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" podStartSLOduration=2.515341325 podStartE2EDuration="8.373643198s" podCreationTimestamp="2025-12-08 19:54:08 +0000 UTC" firstStartedPulling="2025-12-08 19:54:09.775899141 +0000 UTC m=+1502.068744608" lastFinishedPulling="2025-12-08 19:54:15.634201024 +0000 UTC m=+1507.927046481" observedRunningTime="2025-12-08 19:54:16.368201909 +0000 UTC m=+1508.661047366" watchObservedRunningTime="2025-12-08 19:54:16.373643198 +0000 UTC m=+1508.666488655"
Dec 08 19:54:18 crc kubenswrapper[5118]: I1208 19:54:18.995993 5118 ???:1] "http: TLS handshake error from 192.168.126.11:34526: no serving certificate available for the kubelet"
Dec 08 19:54:23 crc kubenswrapper[5118]: E1208 19:54:23.096863 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:54:31 crc kubenswrapper[5118]: E1208 19:54:31.097010 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
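The pod_startup_latency_tracker entry above can be checked by hand: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A small sketch reproducing the arithmetic from the timestamps printed in the entry; the results agree with the logged values up to clock rounding.

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the timestamps in the tracker entry, with the
	// monotonic-clock "m=+..." suffixes dropped.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-08 19:54:08 +0000 UTC")
	firstPull := mustParse("2025-12-08 19:54:09.775899141 +0000 UTC")
	lastPull := mustParse("2025-12-08 19:54:15.634201024 +0000 UTC")
	running := mustParse("2025-12-08 19:54:16.373643198 +0000 UTC")

	e2e := running.Sub(created)          // ~8.373643198s, the podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E minus image-pull time, ~2.5153s
	fmt.Println("podStartE2EDuration ~", e2e)
	fmt.Println("podStartSLOduration ~", slo)
}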
Dec 08 19:54:38 crc kubenswrapper[5118]: I1208 19:54:38.101029 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 08 19:54:38 crc kubenswrapper[5118]: E1208 19:54:38.101646 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:54:39 crc kubenswrapper[5118]: I1208 19:54:39.467430 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:54:39 crc kubenswrapper[5118]: I1208 19:54:39.467745 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:54:46 crc kubenswrapper[5118]: E1208 19:54:46.097399 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:54:52 crc kubenswrapper[5118]: I1208 19:54:52.905777 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50278: no serving certificate available for the kubelet"
Dec 08 19:54:53 crc kubenswrapper[5118]: I1208 19:54:53.040081 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50286: no serving certificate available for the kubelet"
Dec 08 19:54:53 crc kubenswrapper[5118]: I1208 19:54:53.065783 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50296: no serving certificate available for the kubelet"
Dec 08 19:54:53 crc kubenswrapper[5118]: E1208 19:54:53.097438 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:55:00 crc kubenswrapper[5118]: E1208 19:55:00.096767 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:55:04 crc kubenswrapper[5118]: E1208 19:55:04.096677 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:55:04 crc kubenswrapper[5118]: I1208 19:55:04.511378 5118 ???:1] "http: TLS handshake error from 192.168.126.11:42954: no serving certificate available for the kubelet"
Dec 08 19:55:04 crc kubenswrapper[5118]: I1208 19:55:04.667064 5118 ???:1] "http: TLS handshake error from 192.168.126.11:42960: no serving certificate available for the kubelet"
Dec 08 19:55:04 crc kubenswrapper[5118]: I1208 19:55:04.719577 5118 ???:1] "http: TLS handshake error from 192.168.126.11:42976: no serving certificate available for the kubelet"
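The recurring "no serving certificate available for the kubelet" entries mean the kubelet is accepting TCP connections but has no serving certificate to present, typically because its serving CSR has not yet been approved or certificate rotation is stuck, so every client fails at the handshake. A client-side Go sketch that reproduces such a failure against the node address from the log; port 10250 (the kubelet's default serving port) is an assumption here, and the exact error text depends on the server.

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.126.11:10250" // node address from the log; port assumed
	conn, err := tls.DialWithDialer(
		&net.Dialer{Timeout: 5 * time.Second},
		"tcp", addr,
		&tls.Config{InsecureSkipVerify: true}, // we only ask: is any cert served at all?
	)
	if err != nil {
		// Expected while the kubelet has no serving certificate.
		fmt.Println("handshake failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("server presented a certificate for:",
		conn.ConnectionState().PeerCertificates[0].Subject)
}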
Dec 08 19:55:09 crc kubenswrapper[5118]: I1208 19:55:09.468264 5118 patch_prober.go:28] interesting pod/machine-config-daemon-twnt9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:55:09 crc kubenswrapper[5118]: I1208 19:55:09.468621 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:55:09 crc kubenswrapper[5118]: I1208 19:55:09.468677 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-twnt9"
Dec 08 19:55:09 crc kubenswrapper[5118]: I1208 19:55:09.469305 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"} pod="openshift-machine-config-operator/machine-config-daemon-twnt9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 19:55:09 crc kubenswrapper[5118]: I1208 19:55:09.469363 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerName="machine-config-daemon" containerID="cri-o://a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e" gracePeriod=600
Dec 08 19:55:09 crc kubenswrapper[5118]: E1208 19:55:09.606523 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:55:09 crc kubenswrapper[5118]: I1208 19:55:09.654556 5118 generic.go:358] "Generic (PLEG): container finished" podID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e" exitCode=0
Dec 08 19:55:09 crc kubenswrapper[5118]: I1208 19:55:09.654629 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" event={"ID":"0052f7cb-2eab-42e7-8f98-b1544811d9c3","Type":"ContainerDied","Data":"a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"}
Dec 08 19:55:09 crc kubenswrapper[5118]: I1208 19:55:09.654708 5118 scope.go:117] "RemoveContainer" containerID="11275eaf7b06e592dd99c44ac082b7de4fba7d2eb85504703a7ae6c56cad955b"
Dec 08 19:55:09 crc kubenswrapper[5118]: I1208 19:55:09.655123 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:55:09 crc kubenswrapper[5118]: E1208 19:55:09.655375 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:55:13 crc kubenswrapper[5118]: E1208 19:55:13.097349 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:55:16 crc kubenswrapper[5118]: E1208 19:55:16.096994 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.123760 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53392: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.292766 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53394: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.296502 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53406: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.298133 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53408: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.436808 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53418: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.449096 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53424: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.482033 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53426: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.609008 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53440: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.755357 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53450: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.760036 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53452: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.784293 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53456: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.934314 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53472: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.946316 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53484: no serving certificate available for the kubelet"
Dec 08 19:55:19 crc kubenswrapper[5118]: I1208 19:55:19.955277 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53494: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.092008 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53500: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.280248 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53502: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.283352 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53516: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.299506 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53520: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.430845 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53528: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.442857 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53538: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.467281 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53554: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.597440 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53570: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.784477 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53582: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.784857 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53580: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.792419 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53592: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.928453 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53594: no serving certificate available for the kubelet"
Dec 08 19:55:20 crc kubenswrapper[5118]: I1208 19:55:20.970199 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53608: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.003388 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53624: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.088476 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53628: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.096480 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:55:21 crc kubenswrapper[5118]: E1208 19:55:21.096767 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.250770 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53644: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.272474 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53646: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.279280 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53660: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.432122 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53676: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.432775 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53692: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.471667 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53706: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.614911 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53712: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.631284 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53716: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.790194 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53720: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.796678 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53736: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.801122 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53746: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.941676 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53752: no serving certificate available for the kubelet"
Dec 08 19:55:21 crc kubenswrapper[5118]: I1208 19:55:21.979572 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53756: no serving certificate available for the kubelet"
Dec 08 19:55:22 crc kubenswrapper[5118]: I1208 19:55:22.021389 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53766: no serving certificate available for the kubelet"
Dec 08 19:55:25 crc kubenswrapper[5118]: E1208 19:55:25.096864 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.550785 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jw92g"]
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.560780 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jw92g"
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.564467 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jw92g"]
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.603716 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4jt\" (UniqueName: \"kubernetes.io/projected/0a281536-8b48-491e-9253-1a8d74a9b33e-kube-api-access-gg4jt\") pod \"community-operators-jw92g\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " pod="openshift-marketplace/community-operators-jw92g"
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.604290 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-utilities\") pod \"community-operators-jw92g\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " pod="openshift-marketplace/community-operators-jw92g"
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.604430 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-catalog-content\") pod \"community-operators-jw92g\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " pod="openshift-marketplace/community-operators-jw92g"
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.706197 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-utilities\") pod \"community-operators-jw92g\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " pod="openshift-marketplace/community-operators-jw92g"
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.706266 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-catalog-content\") pod \"community-operators-jw92g\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " pod="openshift-marketplace/community-operators-jw92g"
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.706321 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4jt\" (UniqueName: \"kubernetes.io/projected/0a281536-8b48-491e-9253-1a8d74a9b33e-kube-api-access-gg4jt\") pod \"community-operators-jw92g\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " pod="openshift-marketplace/community-operators-jw92g"
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.706847 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-utilities\") pod \"community-operators-jw92g\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " pod="openshift-marketplace/community-operators-jw92g"
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.706862 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-catalog-content\") pod \"community-operators-jw92g\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " pod="openshift-marketplace/community-operators-jw92g"
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.736750 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg4jt\" (UniqueName: \"kubernetes.io/projected/0a281536-8b48-491e-9253-1a8d74a9b33e-kube-api-access-gg4jt\") pod \"community-operators-jw92g\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " pod="openshift-marketplace/community-operators-jw92g"
Dec 08 19:55:26 crc kubenswrapper[5118]: I1208 19:55:26.885334 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jw92g"
Dec 08 19:55:27 crc kubenswrapper[5118]: W1208 19:55:27.377811 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a281536_8b48_491e_9253_1a8d74a9b33e.slice/crio-7884a7c72caf0dde04d03965339350736425f82f65c199fe795f5f755bf0d231 WatchSource:0}: Error finding container 7884a7c72caf0dde04d03965339350736425f82f65c199fe795f5f755bf0d231: Status 404 returned error can't find the container with id 7884a7c72caf0dde04d03965339350736425f82f65c199fe795f5f755bf0d231
Dec 08 19:55:27 crc kubenswrapper[5118]: I1208 19:55:27.378263 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jw92g"]
Dec 08 19:55:27 crc kubenswrapper[5118]: I1208 19:55:27.764336 5118 generic.go:358] "Generic (PLEG): container finished" podID="0a281536-8b48-491e-9253-1a8d74a9b33e" containerID="59b405edbff68d0412991b918574a76489970ad1d64b3ef0955cba705ccd174c" exitCode=0
Dec 08 19:55:27 crc kubenswrapper[5118]: I1208 19:55:27.764452 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw92g" event={"ID":"0a281536-8b48-491e-9253-1a8d74a9b33e","Type":"ContainerDied","Data":"59b405edbff68d0412991b918574a76489970ad1d64b3ef0955cba705ccd174c"}
Dec 08 19:55:27 crc kubenswrapper[5118]: I1208 19:55:27.764845 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw92g" event={"ID":"0a281536-8b48-491e-9253-1a8d74a9b33e","Type":"ContainerStarted","Data":"7884a7c72caf0dde04d03965339350736425f82f65c199fe795f5f755bf0d231"}
Dec 08 19:55:28 crc kubenswrapper[5118]: I1208 19:55:28.772015 5118 generic.go:358] "Generic (PLEG): container finished" podID="0a281536-8b48-491e-9253-1a8d74a9b33e" containerID="43a3802dad5b5022ef44ca327d98a18f47ac6cacf9efbbd718f0a4b6dda44541" exitCode=0
Dec 08 19:55:28 crc kubenswrapper[5118]: I1208 19:55:28.772122 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw92g" event={"ID":"0a281536-8b48-491e-9253-1a8d74a9b33e","Type":"ContainerDied","Data":"43a3802dad5b5022ef44ca327d98a18f47ac6cacf9efbbd718f0a4b6dda44541"}
Dec 08 19:55:29 crc kubenswrapper[5118]: I1208 19:55:29.780528 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw92g" event={"ID":"0a281536-8b48-491e-9253-1a8d74a9b33e","Type":"ContainerStarted","Data":"1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d"}
Dec 08 19:55:29 crc kubenswrapper[5118]: I1208 19:55:29.802528 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jw92g" podStartSLOduration=3.168327421 podStartE2EDuration="3.802513387s" podCreationTimestamp="2025-12-08 19:55:26 +0000 UTC" firstStartedPulling="2025-12-08 19:55:27.765440437 +0000 UTC m=+1580.058285914" lastFinishedPulling="2025-12-08 19:55:28.399626423 +0000 UTC m=+1580.692471880" observedRunningTime="2025-12-08 19:55:29.798843928 +0000 UTC m=+1582.091689385" watchObservedRunningTime="2025-12-08 19:55:29.802513387 +0000 UTC m=+1582.095358844"
Dec 08 19:55:30 crc kubenswrapper[5118]: E1208 19:55:30.096785 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:55:33 crc kubenswrapper[5118]: I1208 19:55:33.424260 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55894: no serving certificate available for the kubelet"
Dec 08 19:55:33 crc kubenswrapper[5118]: I1208 19:55:33.593922 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55902: no serving certificate available for the kubelet"
Dec 08 19:55:33 crc kubenswrapper[5118]: I1208 19:55:33.594532 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55910: no serving certificate available for the kubelet"
Dec 08 19:55:33 crc kubenswrapper[5118]: I1208 19:55:33.766210 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55920: no serving certificate available for the kubelet"
Dec 08 19:55:33 crc kubenswrapper[5118]: I1208 19:55:33.775437 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55930: no serving certificate available for the kubelet"
Dec 08 19:55:36 crc kubenswrapper[5118]: I1208 19:55:36.096701 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" Dec 08 19:55:36 crc kubenswrapper[5118]: I1208 19:55:36.886399 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-jw92g" Dec 08 19:55:36 crc kubenswrapper[5118]: I1208 19:55:36.886461 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jw92g" Dec 08 19:55:36 crc kubenswrapper[5118]: I1208 19:55:36.925351 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jw92g" Dec 08 19:55:37 crc kubenswrapper[5118]: E1208 19:55:37.096573 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:55:37 crc kubenswrapper[5118]: I1208 19:55:37.904207 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jw92g" Dec 08 19:55:37 crc kubenswrapper[5118]: I1208 19:55:37.952148 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jw92g"] Dec 08 19:55:39 crc kubenswrapper[5118]: I1208 19:55:39.860938 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jw92g" podUID="0a281536-8b48-491e-9253-1a8d74a9b33e" containerName="registry-server" containerID="cri-o://1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d" gracePeriod=2 Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.796401 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jw92g" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.871335 5118 generic.go:358] "Generic (PLEG): container finished" podID="0a281536-8b48-491e-9253-1a8d74a9b33e" containerID="1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d" exitCode=0 Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.871411 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw92g" event={"ID":"0a281536-8b48-491e-9253-1a8d74a9b33e","Type":"ContainerDied","Data":"1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d"} Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.871494 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw92g" event={"ID":"0a281536-8b48-491e-9253-1a8d74a9b33e","Type":"ContainerDied","Data":"7884a7c72caf0dde04d03965339350736425f82f65c199fe795f5f755bf0d231"} Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.871518 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jw92g" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.871532 5118 scope.go:117] "RemoveContainer" containerID="1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.895198 5118 scope.go:117] "RemoveContainer" containerID="43a3802dad5b5022ef44ca327d98a18f47ac6cacf9efbbd718f0a4b6dda44541" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.913963 5118 scope.go:117] "RemoveContainer" containerID="59b405edbff68d0412991b918574a76489970ad1d64b3ef0955cba705ccd174c" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.916611 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-catalog-content\") pod \"0a281536-8b48-491e-9253-1a8d74a9b33e\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.916754 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4jt\" (UniqueName: \"kubernetes.io/projected/0a281536-8b48-491e-9253-1a8d74a9b33e-kube-api-access-gg4jt\") pod \"0a281536-8b48-491e-9253-1a8d74a9b33e\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.916815 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-utilities\") pod \"0a281536-8b48-491e-9253-1a8d74a9b33e\" (UID: \"0a281536-8b48-491e-9253-1a8d74a9b33e\") " Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.919321 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-utilities" (OuterVolumeSpecName: "utilities") pod "0a281536-8b48-491e-9253-1a8d74a9b33e" (UID: "0a281536-8b48-491e-9253-1a8d74a9b33e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.924783 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a281536-8b48-491e-9253-1a8d74a9b33e-kube-api-access-gg4jt" (OuterVolumeSpecName: "kube-api-access-gg4jt") pod "0a281536-8b48-491e-9253-1a8d74a9b33e" (UID: "0a281536-8b48-491e-9253-1a8d74a9b33e"). InnerVolumeSpecName "kube-api-access-gg4jt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.951224 5118 scope.go:117] "RemoveContainer" containerID="1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d" Dec 08 19:55:40 crc kubenswrapper[5118]: E1208 19:55:40.951588 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d\": container with ID starting with 1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d not found: ID does not exist" containerID="1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.951651 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d"} err="failed to get container status \"1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d\": rpc error: code = NotFound desc = could not find container \"1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d\": container with ID starting with 1717ceb22e7d6d52b61d76927579778da9ad7337e5c2141d34c9397afaadbc6d not found: ID does not exist" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.951669 5118 scope.go:117] "RemoveContainer" containerID="43a3802dad5b5022ef44ca327d98a18f47ac6cacf9efbbd718f0a4b6dda44541" Dec 08 19:55:40 crc kubenswrapper[5118]: E1208 19:55:40.951968 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a3802dad5b5022ef44ca327d98a18f47ac6cacf9efbbd718f0a4b6dda44541\": container with ID starting with 43a3802dad5b5022ef44ca327d98a18f47ac6cacf9efbbd718f0a4b6dda44541 not found: ID does not exist" containerID="43a3802dad5b5022ef44ca327d98a18f47ac6cacf9efbbd718f0a4b6dda44541" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.951993 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a3802dad5b5022ef44ca327d98a18f47ac6cacf9efbbd718f0a4b6dda44541"} err="failed to get container status \"43a3802dad5b5022ef44ca327d98a18f47ac6cacf9efbbd718f0a4b6dda44541\": rpc error: code = NotFound desc = could not find container \"43a3802dad5b5022ef44ca327d98a18f47ac6cacf9efbbd718f0a4b6dda44541\": container with ID starting with 43a3802dad5b5022ef44ca327d98a18f47ac6cacf9efbbd718f0a4b6dda44541 not found: ID does not exist" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.952011 5118 scope.go:117] "RemoveContainer" containerID="59b405edbff68d0412991b918574a76489970ad1d64b3ef0955cba705ccd174c" Dec 08 19:55:40 crc kubenswrapper[5118]: E1208 19:55:40.952284 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b405edbff68d0412991b918574a76489970ad1d64b3ef0955cba705ccd174c\": container with ID starting with 59b405edbff68d0412991b918574a76489970ad1d64b3ef0955cba705ccd174c not found: ID does not 
exist" containerID="59b405edbff68d0412991b918574a76489970ad1d64b3ef0955cba705ccd174c" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.952307 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b405edbff68d0412991b918574a76489970ad1d64b3ef0955cba705ccd174c"} err="failed to get container status \"59b405edbff68d0412991b918574a76489970ad1d64b3ef0955cba705ccd174c\": rpc error: code = NotFound desc = could not find container \"59b405edbff68d0412991b918574a76489970ad1d64b3ef0955cba705ccd174c\": container with ID starting with 59b405edbff68d0412991b918574a76489970ad1d64b3ef0955cba705ccd174c not found: ID does not exist" Dec 08 19:55:40 crc kubenswrapper[5118]: I1208 19:55:40.966916 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a281536-8b48-491e-9253-1a8d74a9b33e" (UID: "0a281536-8b48-491e-9253-1a8d74a9b33e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:55:41 crc kubenswrapper[5118]: I1208 19:55:41.018919 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gg4jt\" (UniqueName: \"kubernetes.io/projected/0a281536-8b48-491e-9253-1a8d74a9b33e-kube-api-access-gg4jt\") on node \"crc\" DevicePath \"\"" Dec 08 19:55:41 crc kubenswrapper[5118]: I1208 19:55:41.018993 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:55:41 crc kubenswrapper[5118]: I1208 19:55:41.019012 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a281536-8b48-491e-9253-1a8d74a9b33e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:55:41 crc kubenswrapper[5118]: I1208 19:55:41.211381 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jw92g"] Dec 08 19:55:41 crc kubenswrapper[5118]: I1208 19:55:41.217404 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jw92g"] Dec 08 19:55:42 crc kubenswrapper[5118]: I1208 19:55:42.106751 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a281536-8b48-491e-9253-1a8d74a9b33e" path="/var/lib/kubelet/pods/0a281536-8b48-491e-9253-1a8d74a9b33e/volumes" Dec 08 19:55:44 crc kubenswrapper[5118]: E1208 19:55:44.098285 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:55:49 crc 
Dec 08 19:55:49 crc kubenswrapper[5118]: I1208 19:55:49.096163 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:55:49 crc kubenswrapper[5118]: E1208 19:55:49.096712 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:55:51 crc kubenswrapper[5118]: E1208 19:55:51.097356 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:55:55 crc kubenswrapper[5118]: E1208 19:55:55.099096 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:56:02 crc kubenswrapper[5118]: I1208 19:56:02.096562 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:56:02 crc kubenswrapper[5118]: E1208 19:56:02.097412 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:56:04 crc kubenswrapper[5118]: E1208 19:56:04.097469 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:56:07 crc kubenswrapper[5118]: E1208 19:56:07.097656 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:56:09 crc kubenswrapper[5118]: I1208 19:56:09.080394 5118 generic.go:358] "Generic (PLEG): container finished" podID="cf1db756-38be-4819-ad4f-c6e46901d905" containerID="e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b" exitCode=0 Dec 08 19:56:09 crc kubenswrapper[5118]: I1208 19:56:09.080479 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" event={"ID":"cf1db756-38be-4819-ad4f-c6e46901d905","Type":"ContainerDied","Data":"e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b"} Dec 08 19:56:09 crc kubenswrapper[5118]: I1208 19:56:09.081380 5118 scope.go:117] "RemoveContainer" containerID="e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b" Dec 08 19:56:16 crc kubenswrapper[5118]: E1208 19:56:16.097319 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0" Dec 08 19:56:17 crc kubenswrapper[5118]: I1208 
19:56:17.096884 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e" Dec 08 19:56:17 crc kubenswrapper[5118]: E1208 19:56:17.097346 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.014129 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38326: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.165406 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38342: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.180415 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38348: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.205565 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38350: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.220596 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38354: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.237618 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38366: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.250684 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38378: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.264277 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38390: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.275342 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38400: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.444722 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38408: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.461728 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38422: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.493255 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38428: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.507180 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38432: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.527217 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38440: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.538819 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38450: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.557278 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38458: no serving certificate available for the kubelet" Dec 08 19:56:19 crc kubenswrapper[5118]: I1208 19:56:19.570946 
5118 ???:1] "http: TLS handshake error from 192.168.126.11:38464: no serving certificate available for the kubelet" Dec 08 19:56:20 crc kubenswrapper[5118]: E1208 19:56:20.097816 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0" Dec 08 19:56:24 crc kubenswrapper[5118]: I1208 19:56:24.613446 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7rt6r/must-gather-bfxbs"] Dec 08 19:56:24 crc kubenswrapper[5118]: I1208 19:56:24.614570 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" podUID="cf1db756-38be-4819-ad4f-c6e46901d905" containerName="copy" containerID="cri-o://3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e" gracePeriod=2 Dec 08 19:56:24 crc kubenswrapper[5118]: I1208 19:56:24.621339 5118 status_manager.go:895] "Failed to get status for pod" podUID="cf1db756-38be-4819-ad4f-c6e46901d905" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" err="pods \"must-gather-bfxbs\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-7rt6r\": no relationship found between node 'crc' and this object" Dec 08 19:56:24 crc kubenswrapper[5118]: I1208 19:56:24.623435 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7rt6r/must-gather-bfxbs"] Dec 08 19:56:24 crc kubenswrapper[5118]: I1208 19:56:24.996870 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7rt6r_must-gather-bfxbs_cf1db756-38be-4819-ad4f-c6e46901d905/copy/0.log" Dec 08 19:56:24 crc kubenswrapper[5118]: I1208 19:56:24.997502 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" Dec 08 19:56:24 crc kubenswrapper[5118]: I1208 19:56:24.999121 5118 status_manager.go:895] "Failed to get status for pod" podUID="cf1db756-38be-4819-ad4f-c6e46901d905" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" err="pods \"must-gather-bfxbs\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-7rt6r\": no relationship found between node 'crc' and this object" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.072931 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ttkz\" (UniqueName: \"kubernetes.io/projected/cf1db756-38be-4819-ad4f-c6e46901d905-kube-api-access-2ttkz\") pod \"cf1db756-38be-4819-ad4f-c6e46901d905\" (UID: \"cf1db756-38be-4819-ad4f-c6e46901d905\") " Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.073160 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf1db756-38be-4819-ad4f-c6e46901d905-must-gather-output\") pod \"cf1db756-38be-4819-ad4f-c6e46901d905\" (UID: \"cf1db756-38be-4819-ad4f-c6e46901d905\") " Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.079424 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1db756-38be-4819-ad4f-c6e46901d905-kube-api-access-2ttkz" (OuterVolumeSpecName: "kube-api-access-2ttkz") pod "cf1db756-38be-4819-ad4f-c6e46901d905" (UID: "cf1db756-38be-4819-ad4f-c6e46901d905"). InnerVolumeSpecName "kube-api-access-2ttkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.115426 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1db756-38be-4819-ad4f-c6e46901d905-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cf1db756-38be-4819-ad4f-c6e46901d905" (UID: "cf1db756-38be-4819-ad4f-c6e46901d905"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.174334 5118 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf1db756-38be-4819-ad4f-c6e46901d905-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.174367 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2ttkz\" (UniqueName: \"kubernetes.io/projected/cf1db756-38be-4819-ad4f-c6e46901d905-kube-api-access-2ttkz\") on node \"crc\" DevicePath \"\"" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.196481 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7rt6r_must-gather-bfxbs_cf1db756-38be-4819-ad4f-c6e46901d905/copy/0.log" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.196914 5118 generic.go:358] "Generic (PLEG): container finished" podID="cf1db756-38be-4819-ad4f-c6e46901d905" containerID="3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e" exitCode=143 Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.196993 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.197034 5118 scope.go:117] "RemoveContainer" containerID="3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.198274 5118 status_manager.go:895] "Failed to get status for pod" podUID="cf1db756-38be-4819-ad4f-c6e46901d905" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" err="pods \"must-gather-bfxbs\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-7rt6r\": no relationship found between node 'crc' and this object" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.213040 5118 status_manager.go:895] "Failed to get status for pod" podUID="cf1db756-38be-4819-ad4f-c6e46901d905" pod="openshift-must-gather-7rt6r/must-gather-bfxbs" err="pods \"must-gather-bfxbs\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-7rt6r\": no relationship found between node 'crc' and this object" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.218583 5118 scope.go:117] "RemoveContainer" containerID="e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.285850 5118 scope.go:117] "RemoveContainer" containerID="3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e" Dec 08 19:56:25 crc kubenswrapper[5118]: E1208 19:56:25.286215 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e\": container with ID starting with 3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e not found: ID does not exist" containerID="3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.286245 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e"} err="failed to get container status \"3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e\": rpc error: code = NotFound desc = could not find container \"3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e\": container with ID starting with 3ea970b4b33cadfe973543a3f3931de8093994ac05b2083c8410c67de26c8e1e not found: ID does not exist" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.286267 5118 scope.go:117] "RemoveContainer" containerID="e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b" Dec 08 19:56:25 crc kubenswrapper[5118]: E1208 19:56:25.286805 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b\": container with ID starting with e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b not found: ID does not exist" containerID="e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b" Dec 08 19:56:25 crc kubenswrapper[5118]: I1208 19:56:25.286842 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b"} err="failed to get container status \"e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b\": rpc 
error: code = NotFound desc = could not find container \"e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b\": container with ID starting with e714f1bc9ba9835c043fada1fb72cd62069f8827359d2b4b8bb433f74f8ff40b not found: ID does not exist"
Dec 08 19:56:26 crc kubenswrapper[5118]: I1208 19:56:26.105896 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf1db756-38be-4819-ad4f-c6e46901d905" path="/var/lib/kubelet/pods/cf1db756-38be-4819-ad4f-c6e46901d905/volumes"
Dec 08 19:56:29 crc kubenswrapper[5118]: E1208 19:56:29.096947 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:56:30 crc kubenswrapper[5118]: I1208 19:56:30.097044 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:56:30 crc kubenswrapper[5118]: E1208 19:56:30.097617 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:56:34 crc kubenswrapper[5118]: E1208 19:56:34.097129 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:56:41 crc kubenswrapper[5118]: E1208 19:56:41.096681 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:56:43 crc kubenswrapper[5118]: I1208 19:56:43.096934 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:56:43 crc kubenswrapper[5118]: E1208 19:56:43.097408 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:56:48 crc kubenswrapper[5118]: E1208 19:56:48.105587 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:56:53 crc kubenswrapper[5118]: I1208 19:56:53.148944 5118 ???:1] "http: TLS handshake error from 192.168.126.11:39232: no serving certificate available for the kubelet"
Dec 08 19:56:55 crc kubenswrapper[5118]: I1208 19:56:55.096524 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:56:55 crc kubenswrapper[5118]: E1208 19:56:55.096781 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:56:55 crc kubenswrapper[5118]: E1208 19:56:55.097452 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:56:59 crc kubenswrapper[5118]: E1208 19:56:59.097548 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:57:07 crc kubenswrapper[5118]: E1208 19:57:07.097847 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:57:08 crc kubenswrapper[5118]: I1208 19:57:08.108001 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:57:08 crc kubenswrapper[5118]: E1208 19:57:08.108447 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:57:14 crc kubenswrapper[5118]: E1208 19:57:14.097970 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:57:19 crc kubenswrapper[5118]: I1208 19:57:19.097523 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:57:19 crc kubenswrapper[5118]: E1208 19:57:19.098934 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:57:19 crc kubenswrapper[5118]: E1208 19:57:19.098933 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:57:26 crc kubenswrapper[5118]: E1208 19:57:26.097356 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:57:31 crc kubenswrapper[5118]: I1208 19:57:31.096749 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:57:31 crc kubenswrapper[5118]: E1208 19:57:31.099995 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:57:31 crc kubenswrapper[5118]: E1208 19:57:31.097951 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:57:39 crc kubenswrapper[5118]: E1208 19:57:39.096799 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:57:43 crc kubenswrapper[5118]: E1208 19:57:43.097926 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:57:44 crc kubenswrapper[5118]: I1208 19:57:44.097187 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:57:44 crc kubenswrapper[5118]: E1208 19:57:44.097925 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:57:53 crc kubenswrapper[5118]: E1208 19:57:53.096790 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:57:57 crc kubenswrapper[5118]: I1208 19:57:57.096018 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:57:57 crc kubenswrapper[5118]: E1208 19:57:57.098182 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:57:58 crc kubenswrapper[5118]: E1208 19:57:58.114587 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:58:05 crc kubenswrapper[5118]: E1208 19:58:05.097479 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:58:09 crc kubenswrapper[5118]: I1208 19:58:09.096949 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:58:09 crc kubenswrapper[5118]: E1208 19:58:09.097357 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:58:13 crc kubenswrapper[5118]: E1208 19:58:13.096921 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
Dec 08 19:58:20 crc kubenswrapper[5118]: I1208 19:58:20.097393 5118 scope.go:117] "RemoveContainer" containerID="a80ba4d52b6f1854ef4d9afdfce27ff6dfbc3fa0a225d280aa16db710374434e"
Dec 08 19:58:20 crc kubenswrapper[5118]: E1208 19:58:20.098469 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-twnt9_openshift-machine-config-operator(0052f7cb-2eab-42e7-8f98-b1544811d9c3)\"" pod="openshift-machine-config-operator/machine-config-daemon-twnt9" podUID="0052f7cb-2eab-42e7-8f98-b1544811d9c3"
Dec 08 19:58:20 crc kubenswrapper[5118]: E1208 19:58:20.099147 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-lmd4w" podUID="5c9df676-377e-4cce-8389-95a81a2b54a0"
Dec 08 19:58:24 crc kubenswrapper[5118]: E1208 19:58:24.097733 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-75w9c" podUID="985fc5a3-6fa6-4691-9d36-e2cb03333fe0"
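
The dominant failure in this window is the registry-server ImagePullBackOff: both the plain image path and the OCI-artifact path fail with "manifest unknown", meaning the internal registry has no manifest for the latest tag of service-telemetry/service-telemetry-framework-index (the index image was never pushed, or was pushed under a different tag). Below is a minimal sketch, not a definitive procedure, of how one could verify this against the Docker Registry HTTP API v2 (the same manifest lookup the kubelet's pull performs); the registry host and repository come from the log above, while the helper names, token handling, and TLS choices are assumptions for illustration.

# probe_manifest.py - minimal sketch; assumes the registry is reachable
# from where this runs and TOKEN holds a valid bearer token (e.g. the
# output of `oc whoami -t`). Registry host and repo are from the log.
import requests

REGISTRY = "image-registry.openshift-image-registry.svc:5000"
REPO = "service-telemetry/service-telemetry-framework-index"
TAG = "latest"
TOKEN = "REPLACE_ME"  # placeholder; supply a real token

def manifest_exists(registry: str, repo: str, tag: str, token: str) -> bool:
    """HEAD the v2 manifest endpoint; a 404 here is what the kubelet
    surfaces as 'manifest unknown'."""
    resp = requests.head(
        f"https://{registry}/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.docker.distribution.manifest.v2+json",
        },
        verify=False,  # assumption: self-signed serving cert on this node
        timeout=10,
    )
    return resp.status_code == 200

def list_tags(registry: str, repo: str, token: str) -> list:
    """List the tags that do exist, to see what could be pulled instead."""
    resp = requests.get(
        f"https://{registry}/v2/{repo}/tags/list",
        headers={"Authorization": f"Bearer {token}"},
        verify=False,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("tags") or []

if __name__ == "__main__":
    if not manifest_exists(REGISTRY, REPO, TAG, TOKEN):
        print(f"{REPO}:{TAG} missing; existing tags: {list_tags(REGISTRY, REPO, TOKEN)}")

A 404 from the manifests endpoint corresponds exactly to the logged "manifest unknown"; if tags/list returns other tags, whatever references this image is simply pointing at a tag that was never pushed.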
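The paired RemoveContainer / CrashLoopBackOff lines for machine-config-daemon are a different failure mode: the container keeps exiting, and each kubelet sync is refused because the restart backoff has reached its cap, hence "back-off 5m0s restarting failed container". As a point of reference (upstream kubelet defaults, not read from this node's config), the restart delay starts at 10s and doubles per failure up to a 5-minute cap, so a sketch of that schedule looks like:

# backoff_schedule.py - reproduces the kubelet's default crash-loop
# backoff arithmetic (initial 10s, doubled per failure, capped at 5m).
# Values are upstream kubelet defaults, assumed here only to explain
# the "back-off 5m0s" string in the log above.

INITIAL_S = 10   # default initial restart backoff
CAP_S = 300      # default maximum backoff: 5m0s, as logged

def backoff_schedule(restarts: int) -> list:
    """Delay in seconds before each of the first `restarts` restarts."""
    delays, delay = [], INITIAL_S
    for _ in range(restarts):
        delays.append(delay)
        delay = min(delay * 2, CAP_S)
    return delays

print(backoff_schedule(8))  # [10, 20, 40, 80, 160, 300, 300, 300]

Under those defaults, a container already at the 5m0s cap has failed at least six times; until the underlying crash is fixed, the RemoveContainer / CrashLoopBackOff pair will keep repeating at roughly the sync cadence seen above.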